409 research outputs found

    Monotonic Prefix Consistency in Distributed Systems

    We study the issue of data consistency in distributed systems. Specifically, we consider a distributed system that replicates its data at multiple sites, that is prone to partitions, and that is assumed to be available (in the sense that queries are always eventually answered). In such a setting, strong consistency, where all replicas of the system synchronously apply every operation, is impossible to implement. However, many weaker consistency criteria, which allow a greater number of behaviors than strong consistency, are implementable in available distributed systems. We focus on determining the strongest consistency criterion that can be implemented in a convergent and available distributed system that tolerates partitions, considering objects whose set of operations can be split into updates and queries. We show that no criterion stronger than Monotonic Prefix Consistency (MPC) can be implemented.
    Comment: Submitted paper.
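    A toy sketch (ours, not the paper's; the MPCReplica class is hypothetical) can illustrate what MPC permits: every replica applies a prefix of one totally ordered update log, that prefix only grows, and queries are always answered from whatever prefix has been applied so far.

        class MPCReplica:
            def __init__(self, global_log):
                self.log = global_log   # one totally ordered log of updates
                self.applied = 0        # length of the locally applied prefix
                self.state = []

            def sync(self, k):
                """Apply log entries up to index k; the prefix never shrinks."""
                k = min(k, len(self.log))
                while self.applied < k:
                    self.state.append(self.log[self.applied])   # apply update
                    self.applied += 1

            def query(self):
                # Always answered (availability); reflects some prefix of the
                # log, possibly a stale one, but prefixes only grow.
                return list(self.state)

        log = ["a", "b", "c"]
        r = MPCReplica(log)
        r.sync(2)
        assert r.query() == ["a", "b"]   # a monotone prefix, possibly stale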

    On the nature of progress

    15th International Conference, OPODIS 2011, Toulouse, France, December 13-16, 2011. Proceedings.
    We identify a simple relationship that unifies seemingly unrelated progress conditions, ranging from the deadlock-free and starvation-free properties common to lock-based systems to non-blocking conditions such as obstruction-freedom, lock-freedom, and wait-freedom. Properties can be classified along two dimensions based on the demands they make on the operating system scheduler. A gap in the classification reveals a new non-blocking progress condition, weaker than obstruction-freedom, which we call clash-freedom. The classification provides an intuitively appealing explanation of why programmers continue to devise data structures that mix blocking and non-blocking progress conditions. It also explains why the wait-free property is a natural basis for the consensus hierarchy: a theory of shared-memory computation requires an independent progress condition, not one that makes demands of the operating system scheduler.
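    To make one of the distinctions above concrete, here is a minimal sketch (ours, not the paper's; the Cell class and its cas() are stand-ins for a shared word with hardware compare-and-swap) of a lock-free, but not wait-free, counter increment.

        import threading

        class Cell:
            """A shared cell whose cas() stands in for hardware compare-and-swap."""
            def __init__(self):
                self._v = 0
                self._m = threading.Lock()   # simulates single-instruction atomicity

            def cas(self, expect, new):
                with self._m:
                    if self._v == expect:
                        self._v = new
                        return True
                    return False

            def load(self):
                with self._m:
                    return self._v

        def lock_free_increment(c):
            # Lock-free, not wait-free: one thread may retry forever, but each
            # failed CAS means another thread's CAS succeeded, so the system
            # as a whole always makes progress. A wait-free counter would use
            # fetch-and-add to bound every individual thread's steps.
            while True:
                v = c.load()
                if c.cas(v, v + 1):
                    return

        c = Cell()
        lock_free_increment(c)
        assert c.load() == 1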

    An Epistemic Perspective on Consistency of Concurrent Computations

    Consistency properties of concurrent computations, e.g., sequential consistency, linearizability, or eventual consistency, are essential for devising correct concurrent algorithms. In this paper, we present a logical formalization of such consistency properties that is based on a standard logic of knowledge. Our formalization provides a declarative perspective on what is imposed by consistency requirements and yields unifying insight into properties that at first sight look quite different.
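    For readers unfamiliar with the "standard logic of knowledge" the abstract refers to, the textbook possible-worlds clause for the knowledge operator is sketched below (standard epistemic-logic material, not the paper's own definitions).

        % Textbook possible-worlds semantics of the knowledge operator K_i
        % (standard epistemic logic, not the paper's specific formalization):
        \[
          (M, w) \models K_i \varphi
          \quad\text{iff}\quad
          (M, w') \models \varphi \ \text{for every } w' \text{ with } w \sim_i w'
        \]
        % Process i "knows" phi at world w exactly when phi holds at every
        % world w' that i cannot distinguish from w.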

    Time-Efficient Read/Write Register in Crash-prone Asynchronous Message-Passing Systems

    The atomic register is certainly the most basic object of computing science. Its implementation on top of an n-process asynchronous message-passing system has received a lot of attention. It has been shown that t < n/2 (where t is the maximal number of processes that may crash) is a necessary and sufficient requirement to build an atomic register on top of a crash-prone asynchronous message-passing system. In such a context, this paper revisits the notion of a fast implementation of an atomic register and presents a new time-efficient asynchronous algorithm. Its time-efficiency is measured under two different underlying synchrony assumptions. Under either assumption, a write operation always costs a round-trip delay, while a read operation costs a round-trip delay in favorable circumstances (intuitively, when it is not concurrent with a write). When designing this algorithm, the aim was to stay as close as possible to the spirit of the famous ABD algorithm (proposed by Attiya, Bar-Noy, and Dolev).
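    The quorum pattern the abstract builds on can be shown as a runnable toy of the classic single-writer ABD scheme (our illustration, with an in-memory "network"; the paper's new algorithm differs in when a read can skip the write-back phase).

        N = 5
        replicas = [{"ts": 0, "val": None} for _ in range(N)]   # simulated servers

        def majority():
            return N // 2 + 1   # available as long as t < N/2 servers crash

        def write(ts, val):
            acks = 0
            for r in replicas:                    # one "round trip" to all
                if ts > r["ts"]:
                    r["ts"], r["val"] = ts, val
                acks += 1
                if acks >= majority():            # return once a majority acked
                    return

        def read():
            contacted = replicas[:majority()]     # any majority would do
            ts, val = max(((r["ts"], r["val"]) for r in contacted),
                          key=lambda p: p[0])     # freshest (timestamp, value)
            write(ts, val)                        # write-back: the second
            return val                            # round trip classic reads pay

        write(1, "x")
        assert read() == "x"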

    Fisheye Consistency: Keeping Data in Synch in a Georeplicated World

    Over the last thirty years, numerous consistency conditions for replicated data have been proposed and implemented. Popular examples of such conditions include linearizability (or atomicity), sequential consistency, causal consistency, and eventual consistency. These consistency conditions are usually defined independently from the computing entities (nodes) that manipulate the replicated data; i.e., they do not take into account how computing entities might be linked to one another, or geographically distributed. To address this gap, as a first contribution, this paper introduces the notion of a proximity graph between computing nodes. If two nodes are connected in this graph, their operations must satisfy a strong consistency condition, while the operations invoked by other nodes are allowed to satisfy a weaker condition. The second contribution is the use of such a graph to provide a generic approach to the hybridization of data consistency conditions within the same system. We illustrate this approach on sequential consistency and causal consistency, and present a model in which all data operations are causally consistent, while operations by neighboring processes in the proximity graph are sequentially consistent. The third contribution of the paper is the design and the proof of a distributed algorithm based on this proximity graph, which combines sequential consistency and causal consistency (the resulting condition is called fisheye consistency). In doing so, the paper not only extends the domain of consistency conditions, but provides a generic, provably correct solution of direct relevance to modern georeplicated systems.
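    A minimal sketch of the proximity-graph idea (ours, not the paper's protocol; the node names and the required_condition helper are hypothetical) shows how the graph selects which condition a pair of nodes must satisfy.

        proximity = {            # e.g. nodes co-located in one data center
            "a": {"b"},
            "b": {"a"},
            "c": set(),
        }

        def required_condition(n1, n2):
            if n2 in proximity.get(n1, set()):
                return "sequential consistency"   # strong, between neighbors
            return "causal consistency"           # weaker, for everyone else

        assert required_condition("a", "b") == "sequential consistency"
        assert required_condition("a", "c") == "causal consistency"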

    Bounded Model Checking of Concurrent Data Types on Relaxed Memory Models: A Case Study

    Many multithreaded programs employ concurrent data types to safely share data among threads. However, highly concurrent algorithms for even seemingly simple data types are difficult to implement correctly, especially when considering the relaxed memory ordering models commonly employed by today's multiprocessors. The formal verification of such implementations is challenging as well, because the high degree of concurrency leads to a large number of possible executions. In this case study, we develop a SAT-based bounded verification method and apply it to a representative example, a well-known two-lock concurrent queue algorithm. We first formulate a correctness criterion that specifically targets failures caused by concurrency; it demands that all concurrent executions be observationally equivalent to some serial execution. Next, we define a relaxed memory model that conservatively approximates several common shared-memory multiprocessors. Using commit point specifications, a suite of finite symbolic tests, a prototype encoder, and a standard SAT solver, we successfully identify two failures of a naive implementation that can be observed only under relaxed memory models. We eliminate these failures by inserting appropriate memory ordering fences into the code. The experiments confirm that our approach provides a valuable aid for designing and implementing concurrent data types.
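    For reference, the two-lock concurrent queue studied here is the well-known algorithm of Michael and Scott; a Python rendering follows (illustration only: Python's locks already provide the memory ordering that the authors' fences restore for C-level code on relaxed-memory machines).

        import threading

        class Node:
            def __init__(self, value=None):
                self.value, self.next = value, None

        class TwoLockQueue:
            def __init__(self):
                dummy = Node()                    # dummy node decouples the ends
                self.head = self.tail = dummy
                self.head_lock = threading.Lock()
                self.tail_lock = threading.Lock()

            def enqueue(self, value):
                node = Node(value)
                with self.tail_lock:              # only enqueuers contend here
                    self.tail.next = node
                    self.tail = node

            def dequeue(self):
                with self.head_lock:              # only dequeuers contend here
                    first = self.head.next
                    if first is None:
                        return None               # queue is empty
                    self.head = first
                    return first.value

        q = TwoLockQueue()
        q.enqueue(1)
        q.enqueue(2)
        assert q.dequeue() == 1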

    On Correctness of Data Structures under Reads-Write Concurrency

    We study the correctness of shared data structures under reads-write concurrency. A popular approach to ensuring correctness of read-only operations in the presence of concurrent updates is read-set validation, which checks that all read variables have not changed since they were first read. In practice, this approach is often too conservative, which adversely affects performance. In this paper, we introduce a new framework for reasoning about correctness of data structures under reads-write concurrency, which replaces validation of the entire read-set with more general criteria. Namely, instead of verifying that all read variables remain unchanged, we verify conditions over the shared variables, which we call base conditions. We show that reading values that satisfy some base condition at every point in time implies correctness of read-only operations executing in parallel with updates. Somewhat surprisingly, the resulting correctness guarantee is not equivalent to linearizability, and is instead captured through two new conditions: validity and regularity. Roughly speaking, the former requires that a read-only operation never reach a state unreachable in a sequential execution; the latter generalizes Lamport's notion of regularity to arbitrary data structures, and is weaker than linearizability. We further extend our framework to also capture linearizability. We illustrate how our framework can be applied for reasoning about correctness of a variety of implementations of data structures such as linked lists.
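    The shift from read-set validation to base conditions can be illustrated on a toy object (ours, not the paper's framework; the Range class and its invariant are hypothetical): the lock-free reader checks only that the values it read satisfy the invariant, rather than re-validating that every read variable is unchanged.

        import threading

        class Range:
            """Invariant (base condition): lo <= hi in every sequential state."""
            def __init__(self):
                self.lo, self.hi = 0, 0
                self.lock = threading.Lock()   # updates are serialized

            def grow(self, d):                 # update operation
                with self.lock:
                    self.hi += d
                    self.lo += d // 2

            def width(self):                   # read-only, takes no lock
                lo, hi = self.lo, self.hi      # may interleave with grow()
                if lo <= hi:                   # check the base condition on
                    return hi - lo             # the values actually read
                return self.width()            # condition failed: retry

        r = Range()
        r.grow(4)
        assert r.width() == 2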

    Algae–P relationships, thresholds, and frequency distributions guide nutrient criterion development

    We used complementary information collected using different conceptual approaches to develop recommendations for a stream nutrient criterion based on responses of algal assemblages to anthropogenic P enrichment. Benthic algal attributes, water chemistry, physical habitat, and human activities in watersheds were measured in streams of the Mid-Atlantic Highlands region as part of the Environmental Monitoring and Assessment Program of the US Environmental Protection Agency. Diatom species composition differed greatly between low- and high-pH reference streams; therefore, analyses for criterion development were limited to a subset of 149 well-buffered streams to control for natural variability among streams caused by pH. Regression models showed that total phosphorus (TP) concentrations were approximately 10 µg/L in streams with low levels of human activities in watersheds and that TP increased with % agriculture and urban land uses in watersheds. The 75th percentile at reference sites was 12 µg TP/L. Chlorophyll a and ash-free dry mass increased, and acid and alkaline phosphatase activities decreased, with increasing TP concentration. The number of diatom taxa, evenness, proportion of expected native taxa, and number of high-P taxa increased with TP concentration in streams. In contrast, the number of low-P native taxa and % low-P individuals decreased with increasing TP. Lowess regression and regression tree analysis indicated nonlinear relationships for many diversity indices and attributes of taxonomic composition with respect to TP. Thresholds in these responses occurred between 10 and 20 µg/L and helped justify recommending a P criterion between 10 and 12 µg TP/L to protect high-quality biological conditions in streams of the Mid-Atlantic Highlands.
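    The percentile step of this criterion-development approach is easy to make concrete; the sketch below (our illustration, with hypothetical data and Python's statistics module) computes the 75th percentile of TP at reference sites, the statistic the study reports as 12 µg TP/L.

        import statistics

        # Hypothetical reference-site TP measurements, in micrograms per liter.
        reference_tp = [6, 8, 9, 10, 11, 12, 13, 14]

        q1, median, q3 = statistics.quantiles(reference_tp, n=4,
                                              method="inclusive")
        print(f"75th percentile of reference TP: {q3:.2f} ug/L")
        # The study reports 12 ug TP/L for its 149 well-buffered streams;
        # combined with response thresholds near 10-20 ug/L, this supports
        # a criterion of 10-12 ug TP/L.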

    Data consistency: toward a terminological clarification

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-21413-9_15. Consistency and inconsistency are ubiquitous terms in data engineering. Their relevance to quality is obvious, since consistency is a commonplace dimension of data quality. However, their connotations are vague or ambiguous. In this paper, we address semantic consistency, transaction consistency, replication consistency, eventual consistency, and the new notion of partial consistency in databases. We characterize their distinguishing properties, and also address their differences, interactions, and interdependencies. Partial consistency is an entry door to living with inconsistency, which is an ineludible necessity in the age of big data.
    Decker and F.D. Muñoz were supported by the Spanish MINECO grant TIN 2012-37719-C03-01.
    Decker, H.; Muñoz Escoí, F.D.; Misra, S. (2015). Data consistency: toward a terminological clarification. In Computational Science and Its Applications -- ICCSA 2015: 15th International Conference, Banff, AB, Canada, June 22-25, 2015, Proceedings, Part V, pp. 206-220. Springer International Publishing. https://doi.org/10.1007/978-3-319-21413-9_15

    Defining and Verifying Durable Opacity: Correctness for Persistent Software Transactional Memory

    Non-volatile memory (NVM), aka persistent memory, is a new paradigm for memory that preserves its contents even after power loss. The expected ubiquity of NVM has stimulated interest in the design of novel concepts ensuring correctness of concurrent programming abstractions in the face of persistency. So far, this has led to the design of a number of persistent concurrent data structures, built to satisfy an associated notion of correctness: durable linearizability. In this paper, we transfer the principle of durable concurrent correctness to the area of software transactional memory (STM). Software transactional memory algorithms allow for concurrent access to shared state. Like linearizability for concurrent data structures, opacity is the established notion of correctness for STMs. First, we provide a novel definition of durable opacity, extending opacity to handle crashes and recovery in the context of NVM. Second, we develop a durably opaque version of an existing STM algorithm, namely the Transactional Mutex Lock (TML). Third, we design a proof technique for durable opacity based on refinement between TML and an operational characterisation of durable opacity obtained by adapting the TMS2 specification. Finally, we apply this proof technique to show that the durable version of TML is indeed durably opaque. The correctness proof is mechanized within Isabelle.
    Comment: This is the full version of the paper that is to appear in FORTE 2020 (https://www.discotec.org/2020/forte).
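    A compact sketch of the volatile-memory TML that the paper extends follows (one standard presentation, ours; a durable variant would additionally flush written locations to NVM and log enough state to recover after a crash). The CAS below is a stand-in for an atomic instruction.

        import threading

        class ConflictError(Exception):
            """Raised when a transaction must abort and retry."""

        glb = 0                        # global version: even = no active writer
        _glb_lock = threading.Lock()   # stands in for an atomic CAS on glb

        def _cas_glb(expect, new):
            global glb
            with _glb_lock:
                if glb == expect:
                    glb = new
                    return True
                return False

        class TMLTx:
            def begin(self):
                while True:
                    self.loc = glb
                    if self.loc % 2 == 0:        # spin until no writer is active
                        return

            def read(self, mem, addr):
                v = mem[addr]
                if glb != self.loc:              # a writer intervened: abort
                    raise ConflictError()
                return v

            def write(self, mem, addr, v):
                if self.loc % 2 == 0:            # first write: become the writer
                    if not _cas_glb(self.loc, self.loc + 1):
                        raise ConflictError()
                    self.loc += 1                # glb is now odd: we own it
                mem[addr] = v                    # eager, in-place update
                # A durable variant would flush mem[addr] to NVM here.

            def commit(self):
                global glb
                if self.loc % 2 == 1:            # writer: make glb even again
                    glb = self.loc + 1

        mem = {"x": 0}
        tx = TMLTx()
        tx.begin()
        tx.write(mem, "x", 42)
        tx.commit()
        assert mem["x"] == 42 and glb == 2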