14 research outputs found

    Supporting multiple isolation levels in replicated environments

    Replication is used by databases to implement reliability and provide scalability. However, achieving transparent replication is not an easy task. A replicated database is transparent if it can seamlessly replace a standard stand-alone database without requiring any changes to the other components of the system. Database replication transparency can be achieved if: (a) replication protocols remain hidden from all other components of the system; and (b) the functionality of a stand-alone database is provided. The ability to simultaneously execute transactions under different isolation levels is a functionality offered by all stand-alone databases but not by their replicated counterparts. Allowing different isolation levels may improve overall system performance. For example, the TPC-C benchmark specification tolerates the execution of some transactions at weaker isolation levels in order to increase the throughput of committed transactions. In this paper, we show how replication protocols can be extended to enable transactions to be executed under different isolation levels. © 2012 Elsevier B.V. All rights reserved. This work has been supported by the Spanish Ministerio de Ciencia e Innovación (MICINN) and the European Regional Development Fund (ERDF/FEDER) under research grants TIN2009-14460-C03-01 and TIN2010-17193. The translation of this paper was funded by the Universitat Politècnica de València, Spain. Bernabé-Gisbert, J.M.; Muñoz-Escoí, F.D. (2012). Supporting multiple isolation levels in replicated environments. Data and Knowledge Engineering, 79-80:1-16. doi:10.1016/j.datak.2012.05.001
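    To make the per-transaction choice of isolation level concrete, the minimal JDBC sketch below opens two connections to the same database and requests Read Committed for a read-mostly reporting transaction and Serializable for an order-update transaction. The URL, credentials and table names are invented for illustration; the point is that this is the stand-alone DBMS functionality the paper sets out to preserve under replication.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class MixedIsolationDemo {
    public static void main(String[] args) throws SQLException {
        // Hypothetical JDBC URL and credentials; any JDBC-compliant DBMS would do.
        String url = "jdbc:postgresql://localhost:5432/shop";

        try (Connection reporting = DriverManager.getConnection(url, "app", "secret");
             Connection ordering = DriverManager.getConnection(url, "app", "secret")) {

            // A long, read-mostly reporting transaction tolerates a weaker level.
            reporting.setAutoCommit(false);
            reporting.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);

            // An order-update transaction requests the strictest level.
            ordering.setAutoCommit(false);
            ordering.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);

            try (Statement s1 = reporting.createStatement();
                 Statement s2 = ordering.createStatement()) {
                s1.executeQuery("SELECT COUNT(*) FROM orders");                    // weak isolation suffices
                s2.executeUpdate("UPDATE stock SET qty = qty - 1 WHERE item = 42"); // needs strong isolation
            }

            ordering.commit();
            reporting.commit();
        }
    }
}
```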

    Improving the scalability of cloud-based resilient database servers

    Many now rely on public cloud infrastructure-as-a-service for database servers, mainly by pushing the limits of existing pooling and replication software to operate large shared-nothing virtual server clusters. Yet it is unclear whether this is still the best architectural choice, particularly when the cloud infrastructure provides seamless virtual shared storage and bills clients on actual disk usage. This paper addresses this challenge with Resilient Asynchronous Commit (RAsC), an improvement to a well-known shared-nothing design based on the assumption that a much larger number of servers is required for scale than for resilience. We then compare this proposal to other database server architectures using an analytical model focused on peak throughput and conclude that it provides the best performance/cost trade-off while at the same time addressing a wide range of fault scenarios.

    Data consistency: toward a terminological clarification

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-21413-9_15. Consistency and inconsistency are ubiquitous terms in data engineering. Their relevance to quality is obvious, since consistency is a commonplace dimension of data quality. However, their connotations are vague or ambiguous. In this paper, we address semantic consistency, transaction consistency, replication consistency, eventual consistency and the new notion of partial consistency in databases. We characterize their distinguishing properties, and also address their differences, interactions and interdependencies. Partial consistency is an entry door to living with inconsistency, which is an inescapable necessity in the age of big data. H. Decker and F.D. Muñoz-Escoí were supported by the Spanish MINECO grant TIN2012-37719-C03-01. Decker, H.; Muñoz Escoí, F.D.; Misra, S. (2015). Data consistency: toward a terminological clarification. In Computational Science and Its Applications -- ICCSA 2015: 15th International Conference, Banff, AB, Canada, June 22-25, 2015, Proceedings, Part V. Springer International Publishing. 206-220. https://doi.org/10.1007/978-3-319-21413-9_15
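    As a toy illustration of the gap between replication consistency and eventual consistency discussed above (not taken from the paper), the sketch below models a primary copy and an asynchronously updated replica: a read routed to the lagging replica observes a stale value until the update propagates, after which the copies converge. The in-memory maps and the sleep-based "propagation" are hypothetical stand-ins for real replicas and a real propagation channel.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Toy illustration only: divergence under asynchronous propagation, then convergence. */
public class EventualConsistencyDemo {
    static final Map<String, String> primary = new ConcurrentHashMap<>();
    static final Map<String, String> replica = new ConcurrentHashMap<>();

    public static void main(String[] args) throws InterruptedException {
        primary.put("x", "v1");
        replica.put("x", "v1");

        // A client updates the primary copy only; propagation to the replica is asynchronous.
        primary.put("x", "v2");

        // A read routed to the lagging replica observes the old value: the copies diverge,
        // which is exactly what replication consistency criteria constrain.
        System.out.println("replica read before propagation: " + replica.get("x")); // v1

        // Asynchronous propagation eventually applies the update ...
        Thread.sleep(10);
        replica.put("x", primary.get("x"));

        // ... after which all copies agree again: the "eventual" in eventual consistency.
        System.out.println("replica read after propagation:  " + replica.get("x")); // v2
    }
}
```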

    Iron Behaving Badly: Inappropriate Iron Chelation as a Major Contributor to the Aetiology of Vascular and Other Progressive Inflammatory and Degenerative Diseases

    The production of peroxide and superoxide is an inevitable consequence of aerobic metabolism, and while these particular "reactive oxygen species" (ROSs) can exhibit a number of biological effects, they are not of themselves excessively reactive and thus they are not especially damaging at physiological concentrations. However, their reactions with poorly liganded iron species can lead to the catalytic production of the very reactive and dangerous hydroxyl radical, which is exceptionally damaging and a major cause of chronic inflammation. We review the considerable and wide-ranging evidence for the involvement of this combination of (su)peroxide and poorly liganded iron in a large number of physiological and indeed pathological processes and inflammatory disorders, especially those involving the progressive degradation of cellular and organismal performance. These diseases share a great many similarities and thus might be considered to have a common cause (i.e. iron-catalysed free radical and especially hydroxyl radical generation). The studies reviewed include those focused on a series of cardiovascular, metabolic and neurological diseases, where iron can be found at the sites of plaques and lesions, as well as studies showing the significance of iron to aging and longevity. The effective chelation of iron by natural or synthetic ligands is thus of major physiological (and potentially therapeutic) importance. As is typical of systems properties, physiological observables have multiple molecular causes, and studying them in isolation leads to inconsistent patterns of apparent causality when it is the simultaneous combination of multiple factors that is responsible. This explains, for instance, the decidedly mixed effects of antioxidants that have been observed. Comment: 159 pages, including 9 figures and 2184 references.

    Extending Mixed Serialisation Graphs to Replicated Environments (TR-ITI-ITE-07/20)

    A Database Management System normally deals with a heterogeneous set of transactions which do not necessarily need the same isolation guarantees when executed concurrently. Centralised DBMSs can manage this kind of situation, since they normally use locks and every transaction implicitly requests the locks necessary to ensure its isolation needs. Nevertheless, in replicated environments this issue remains unsolved, since the most widely used replication schemes cannot be adapted to such a heterogeneous workload as easily as centralised ones. In fact, it is even hard to prove whether a replication protocol ensures the isolation guarantees of every transaction unless only one isolation level at a time is supported. In this document we extend Adya's Mixed Serialisation Graphs with more isolation levels and apply them to replicated environments, so that it becomes possible to determine when a given replication protocol ensures every transaction's guarantees.
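    To make the graph-based argument concrete, the sketch below builds a plain direct-serialisation graph over committed transactions and rejects histories whose graph contains a cycle. It is a generic illustration of the acyclicity test, not Adya's Mixed Serialisation Graph formalism, and the transaction identifiers and dependency edges in the example are made up.

```java
import java.util.*;

/** Generic direct-serialisation graph with a cycle test (illustrative only). */
public class SerialisationGraph {
    private final Map<String, Set<String>> edges = new HashMap<>();

    /** Record a dependency: transaction 'from' must precede transaction 'to'. */
    public void addDependency(String from, String to) {
        edges.computeIfAbsent(from, k -> new HashSet<>()).add(to);
        edges.computeIfAbsent(to, k -> new HashSet<>());
    }

    /** A history is conflict-serialisable iff its dependency graph is acyclic. */
    public boolean isSerialisable() {
        Map<String, Integer> state = new HashMap<>(); // 0/absent = unvisited, 1 = on stack, 2 = done
        for (String node : edges.keySet()) {
            if (hasCycle(node, state)) return false;
        }
        return true;
    }

    private boolean hasCycle(String node, Map<String, Integer> state) {
        Integer s = state.get(node);
        if (s != null && s == 1) return true;   // back edge found: cycle
        if (s != null && s == 2) return false;  // already fully explored
        state.put(node, 1);
        for (String next : edges.get(node)) {
            if (hasCycle(next, state)) return true;
        }
        state.put(node, 2);
        return false;
    }

    public static void main(String[] args) {
        SerialisationGraph g = new SerialisationGraph();
        g.addDependency("T1", "T2"); // e.g., T2 reads an item written by T1
        g.addDependency("T2", "T1"); // and T1 reads an item written by T2: a cycle
        System.out.println("serialisable? " + g.isSerialisable()); // false
    }
}
```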

    SIRC, a Multiple Isolation Level Protocol for Middleware-based Data Replication

    One of the weaknesses of database replication protocols, compared to centralized DBMSs, is that they are unable to manage the concurrent execution of transactions at different isolation levels. In recent years, some theoretical works related to this research line have appeared, but none of them has proposed and implemented a real replication protocol with support for multiple isolation levels. This paper takes advantage of our MADIS middleware and one of its implemented Snapshot Isolation protocols to design and implement SIRC, a protocol that is able to execute concurrently both Generalized Snapshot Isolation (GSI) and Generalized Loose Read Committed (GLRC) transactions. We have also carried out a performance analysis to show how this kind of protocol can improve system performance and decrease the transaction abort rate in applications that do not require the strictest isolation level for every transaction.
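    A simplified sketch of how a certification-based replication protocol might treat the two levels differently: snapshot-isolated transactions are certified with a first-committer-wins write-set check against concurrently committed transactions, whereas read-committed transactions skip that check and only apply their writes in delivery order. This is a common pattern in the certification-based replication literature, not the actual SIRC protocol, and all class and method names are hypothetical.

```java
import java.util.*;

/** Hypothetical certifier sketch: per-transaction isolation levels in a replicated setting. */
public class MixedLevelCertifier {
    enum Level { GSI, READ_COMMITTED }

    static final class Tx {
        final Level level;
        final long startVersion;          // version of the snapshot the transaction read from
        final Set<String> writeSet;
        Tx(Level level, long startVersion, Set<String> writeSet) {
            this.level = level; this.startVersion = startVersion; this.writeSet = writeSet;
        }
    }

    private long committedVersion = 0;
    // For each committed version, remember which items it wrote.
    private final NavigableMap<Long, Set<String>> committedWriteSets = new TreeMap<>();

    /** Called in total-order delivery; returns true if the transaction may commit. */
    public synchronized boolean certifyAndApply(Tx tx) {
        if (tx.level == Level.GSI) {
            // Abort on a write-write conflict with any transaction that committed
            // after this transaction's snapshot was taken (first-committer-wins).
            for (Set<String> ws : committedWriteSets.tailMap(tx.startVersion, false).values()) {
                if (!Collections.disjoint(ws, tx.writeSet)) return false;
            }
        }
        // READ_COMMITTED transactions are not certified: their writes are simply
        // applied in delivery order, which suffices for that weaker level here.
        committedVersion++;
        committedWriteSets.put(committedVersion, tx.writeSet);
        return true;
    }

    public static void main(String[] args) {
        MixedLevelCertifier c = new MixedLevelCertifier();
        boolean ok1 = c.certifyAndApply(new Tx(Level.GSI, 0, Set.of("x")));            // commits
        boolean ok2 = c.certifyAndApply(new Tx(Level.GSI, 0, Set.of("x", "y")));       // conflicts on x
        boolean ok3 = c.certifyAndApply(new Tx(Level.READ_COMMITTED, 0, Set.of("x"))); // no certification
        System.out.println(ok1 + " " + ok2 + " " + ok3); // true false true
    }
}
```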

    Providing read committed isolation level in non-blocking ROWA database replication protocols

    Total order ROWA strategies are widely used in replication protocol design. Normally, these protocols provide stronger isolation guarantees such as Serialisable or Snapshot Isolation but, for some applications, a more permissive isolation level like Read Committed fits better. Some centralised database management systems provide Read Committed as the default isolation level, but in replicated systems it is rare to find proposals and systems supporting it. In this paper we extend the notion of Read Committed to a replicated environment, giving the necessary theoretical background to construct Read Committed ROWA database replication protocols.
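    The read-one-write-all pattern the abstract builds on can be summarised in a few lines: reads are answered by the local replica only, while write sets are broadcast to every replica and applied in the same total order everywhere. The sketch below is an illustrative skeleton with hypothetical names and a trivial in-process stand-in for a group communication system; it is not the protocol proposed in the paper.

```java
import java.util.*;
import java.util.function.Consumer;

/** Illustrative ROWA skeleton: read locally, broadcast writes to all replicas in total order. */
public class RowaReplica {
    private final Map<String, String> localStore = new HashMap<>();
    private final Consumer<Map<String, String>> totalOrderBroadcast;

    RowaReplica(Consumer<Map<String, String>> totalOrderBroadcast) {
        this.totalOrderBroadcast = totalOrderBroadcast;
    }

    /** Read-one: a read is answered from the local copy, with no coordination. */
    public String read(String key) {
        return localStore.get(key);
    }

    /** Write-all: the transaction's write set is handed to a total-order broadcast primitive. */
    public void commitWriteSet(Map<String, String> writeSet) {
        totalOrderBroadcast.accept(writeSet);
    }

    /** Delivered at every replica in the same order, so all copies stay identical. */
    public void deliver(Map<String, String> writeSet) {
        localStore.putAll(writeSet);
    }

    public static void main(String[] args) {
        List<RowaReplica> group = new ArrayList<>();
        // Trivial stand-in for a group communication system with total-order delivery.
        Consumer<Map<String, String>> broadcast = ws -> group.forEach(r -> r.deliver(ws));
        group.add(new RowaReplica(broadcast));
        group.add(new RowaReplica(broadcast));

        group.get(0).commitWriteSet(Map.of("x", "1"));
        System.out.println(group.get(1).read("x")); // "1": the write reached all replicas
    }
}
```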

    SIRC-Rep, a Multiple Isolation Level Protocol for Middleware-based Data Replication Architectures

    One of the weaknesses of replicated protocols, compared to centralized ones, is that they are unable to manage the concurrent execution of transactions at different isolation levels. In recent years, some theoretical works related to this research line have appeared, but none of them has proposed and implemented a real replication protocol with support for multiple isolation levels. This paper takes advantage of our MADIS middleware and one of its implemented Snapshot Isolation protocols to design and implement SIRC-Rep, a protocol that is able to execute concurrently both Generalized Snapshot Isolation (GSI) and Read Committed (RC) transactions. We have also carried out a performance analysis to show how this kind of protocol can improve system performance and decrease the transaction abort rate in applications that do not require the strictest isolation level for every transaction.