2,192 research outputs found

    Eventual Consistency: Origin and Support

    Eventual consistency is demanded nowadays in geo-replicated services that need to be highly scalable and available. According to the CAP constraints, when network partitions may arise, a distributed service must choose between being strongly consistent and being highly available. Since scalable services should remain available, relaxed consistency (while the network is partitioned) is the preferred choice. Eventual consistency is not a data-centric consistency model in its own right, but only a state convergence condition to be added to a relaxed consistency model. Several aspects of eventual consistency have not been analysed in depth in previous works: (1) which are the oldest replication proposals providing eventual consistency, (2) which replica consistency models provide the best basis for building eventually consistent services, (3) which mechanisms should be considered for implementing an eventually consistent service, and (4) which combinations of those mechanisms best achieve different concrete goals. This paper provides some notes on these important topics.
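    The "state convergence condition" mentioned in this abstract is commonly realized with conflict-free replicated data types. As a minimal illustration (our own Python sketch, not code from the paper), a grow-only counter converges because its merge is a pointwise maximum, which is commutative, associative, and idempotent:

        # Minimal G-Counter sketch: replicas converge once they exchange
        # state, regardless of message order or repetition.
        class GCounter:
            def __init__(self, replica_id):
                self.replica_id = replica_id
                self.counts = {}                    # per-replica increments

            def increment(self, n=1):
                self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

            def merge(self, other):
                # Pointwise max: any gossip order yields the same state.
                for rid, c in other.counts.items():
                    self.counts[rid] = max(self.counts.get(rid, 0), c)

            def value(self):
                return sum(self.counts.values())

        a, b = GCounter("a"), GCounter("b")
        a.increment(3)                              # applied during a partition
        b.increment(2)
        a.merge(b); b.merge(a)                      # anti-entropy after healing
        assert a.value() == b.value() == 5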

    Data consistency: toward a terminological clarification

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-21413-9_15. Consistency and inconsistency are ubiquitous terms in data engineering. Their relevance to quality is obvious, since consistency is a commonplace dimension of data quality. However, their connotations are vague or ambiguous. In this paper, we address semantic consistency, transaction consistency, replication consistency, eventual consistency, and the new notion of partial consistency in databases. We characterize their distinguishing properties, and also address their differences, interactions, and interdependencies. Partial consistency is an entry door to living with inconsistency, which is an ineludible necessity in the age of big data.

    Decker and F.D. Muñoz-Escoí were supported by the Spanish MINECO grant TIN 2012-37719-C03-01. Citation: Decker, H.; Muñoz-Escoí, F.D.; Misra, S. (2015). Data consistency: toward a terminological clarification. In: Computational Science and Its Applications -- ICCSA 2015: 15th International Conference, Banff, AB, Canada, June 22-25, 2015, Proceedings, Part V, pp. 206-220. Springer International Publishing. https://doi.org/10.1007/978-3-319-21413-9_15
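    Two of the notions this paper separates can be contrasted in a few lines of Python. The sketch below (our illustration with hypothetical data, not code from the paper) shows that each replica can satisfy every integrity constraint, i.e. be semantically consistent, while the replicas still disagree with each other, i.e. violate replication consistency:

        # Semantic consistency: constraints hold within one database state.
        def semantically_consistent(db, constraints):
            return all(check(db) for check in constraints)

        # Replication consistency (strict form): all copies are equal.
        def replication_consistent(replicas):
            return all(r == replicas[0] for r in replicas)

        # Hypothetical constraint: no account balance may be negative.
        constraints = [lambda db: all(v >= 0 for v in db.values())]

        primary = {"alice": 70, "bob": 30}
        replica = {"alice": 100, "bob": 30}    # stale: missed an update

        print(semantically_consistent(primary, constraints))  # True
        print(semantically_consistent(replica, constraints))  # True
        print(replication_consistent([primary, replica]))     # False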

    Better Admission Control and Disk Scheduling for Multimedia Applications

    General purpose operating systems have been designed to provide fast, loss-free disk service to all applications. However, multimedia applications can tolerate some data loss but are very sensitive to variation in disk service timing. Present research efforts to handle multimedia applications assume pessimistic disk behaviour when deciding whether to admit new multimedia connections, so as not to violate real-time application constraints. However, since multimedia applications are 'soft' real-time applications that can tolerate some loss, we propose an optimistic admission control scheme that uses average-case values for disk access. Typically, disk scheduling mechanisms for multimedia applications reduce disk access times only by trying to minimize movement to subsequent blocks after sequencing based on Earliest Deadline First. We propose to implement a disk scheduling algorithm that uses knowledge of the media stored and the permissible loss and jitter for each client, in addition to the physical parameters used by other scheduling algorithms. We will evaluate our approach by implementing our admission control policy and disk scheduling algorithm in Linux and measuring the quality of various multimedia streams. If successful, the contributions of this thesis are new admission control and flexible disk scheduling algorithms for improved multimedia quality of service.
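    The optimistic admission test this abstract proposes can be stated compactly: admit a new stream when the sum of average-case (rather than worst-case) per-round service times still fits in the service round. The following Python sketch uses illustrative parameter names and numbers of our own; it is not the thesis's actual policy:

        def avg_service_time(blocks_per_round, avg_seek_ms,
                             avg_rotation_ms, transfer_ms_per_block):
            # Average-case cost of serving one stream for one round.
            return blocks_per_round * (avg_seek_ms + avg_rotation_ms
                                       + transfer_ms_per_block)

        def admit(streams, new_stream, round_length_ms):
            # Optimistic test: average-case load must fit in one round.
            load = sum(avg_service_time(**s) for s in streams + [new_stream])
            return load <= round_length_ms

        current = [dict(blocks_per_round=4, avg_seek_ms=8.0,
                        avg_rotation_ms=4.0, transfer_ms_per_block=1.5)] * 10
        candidate = dict(blocks_per_round=8, avg_seek_ms=8.0,
                         avg_rotation_ms=4.0, transfer_ms_per_block=1.5)
        print(admit(current, candidate, round_length_ms=650))  # True: 648 ms fit

    A pessimistic controller would plug worst-case seek and rotation times into the same test and reject the candidate; tolerating occasional overload is what lets the optimistic variant admit more streams.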

    A Study of Mongodb and Oracle in an E-Commerce Environment

    As worldwide e-commerce expands, businesses continue to look for better ways to meet their evolving needs with web solutions that scale and perform adequately. Several online retailers have been able to address scaling challenges through the implementation of NoSQL databases. While architecturally different from their relational database counterparts, NoSQL databases typically achieve performance gains by relaxing one or more of the essential transaction processing attributes of atomicity, consistency, isolation, and durability (ACID). As with any emerging technology, there are both critics and supporters of NoSQL databases. The detractors claim that NoSQL is not safe and is at greater risk of data loss. Its ardent defenders, on the other hand, tout the performance gains achieved over relational counterparts. This thesis studies the NoSQL database known as MongoDB and discusses its ability to support the growing needs of e-commerce data processing. It then examines MongoDB's raw performance (compared to Oracle 11g R2, a relational database) and discusses its adherence to ACID.
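    One concrete form of the ACID relaxation mentioned above is MongoDB's per-operation durability setting. The PyMongo calls below are real API, but the connection string and the database and collection names are illustrative assumptions, and the thesis may have benchmarked different settings:

        from pymongo import MongoClient, WriteConcern

        client = MongoClient("mongodb://localhost:27017")
        db = client["shop"]

        # w=0: unacknowledged writes; fastest, but data can be lost.
        fast = db.get_collection("orders", write_concern=WriteConcern(w=0))

        # w="majority" with journaling: closest to a relational commit.
        safe = db.get_collection(
            "orders", write_concern=WriteConcern(w="majority", j=True))

        fast.insert_one({"sku": "A-1", "qty": 2})  # no durability guarantee
        safe.insert_one({"sku": "A-1", "qty": 2})  # majority-acknowledged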

    Dynamic re-optimization techniques for stream processing engines and object stores

    Large scale data storage and processing systems are strongly motivated by the need to store and analyze massive datasets. The complexity of a large class of these systems is rooted in their distributed nature, extreme scale, need for real-time response, and streaming nature. The use of these systems in multi-tenant cloud environments with potential resource interference necessitates fine-grained monitoring and control. In this dissertation, we present efficient, dynamic techniques for re-optimizing stream-processing systems and transactional object-storage systems.

    In the context of stream-processing systems, we present VAYU, a per-topology controller. VAYU uses novel methods and protocols for dynamic, network-aware tuple routing in the dataflow. We show that the feedback-driven controller in VAYU helps achieve high pipeline throughput over long execution periods, as it dynamically detects and diagnoses any pipeline bottlenecks. We present novel heuristics to optimize overlays for group communication operations in the streaming model.

    In the context of object-storage systems, we present M-Lock, a novel lock-localization service for distributed transaction protocols on scale-out object stores to increase transaction throughput. Lock localization refers to the dynamic migration and partitioning of locks across nodes in the scale-out store to reduce cross-partition acquisition of locks. The service leverages the observed object-access patterns to achieve lock clustering and deliver high performance. We also present TransMR, a framework that uses distributed, transactional object stores to orchestrate and execute asynchronous components in amorphous data-parallel applications on scale-out architectures.
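    Lock localization, as defined above, can be approximated with a simple frequency heuristic: periodically move each lock to the partition that acquires it most often, so that most acquisitions become partition-local. The Python sketch below is our own reconstruction for illustration; M-Lock's actual protocol and interfaces are not shown in this abstract:

        from collections import Counter, defaultdict

        class LockLocalizer:
            def __init__(self):
                self.placement = {}                        # lock -> partition
                self.acquisitions = defaultdict(Counter)   # lock -> {partition: n}

            def record(self, lock, partition):
                self.acquisitions[lock][partition] += 1

            def rebalance(self):
                # Migrate every lock to its most frequent acquirer.
                for lock, counts in self.acquisitions.items():
                    self.placement[lock] = counts.most_common(1)[0][0]

            def local_fraction(self):
                local = total = 0
                for lock, counts in self.acquisitions.items():
                    total += sum(counts.values())
                    local += counts.get(self.placement.get(lock), 0)
                return local / total if total else 1.0

        loc = LockLocalizer()
        for partition, lock, n in [("p0", "L1", 90), ("p1", "L1", 10),
                                   ("p1", "L2", 80)]:
            for _ in range(n):
                loc.record(lock, partition)
        loc.rebalance()
        print(f"{loc.local_fraction():.0%} of acquisitions are now local")  # 94%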

    Data modeling with NoSQL : how, when and why

    Integrated Master's thesis. Informatics and Computing Engineering. Faculdade de Engenharia, Universidade do Porto. 201

    Lifenet: a flexible ad hoc networking solution for transient environments

    In the wake of major disasters, the failure of existing communications infrastructure and the subsequent lack of an effective communication solution result in increased risks, inefficiencies, damage, and casualties. Currently available options such as satellite communication are expensive and have limited functionality. A robust communication solution should be affordable, easy to deploy, require little infrastructure, consume little power, and facilitate Internet access. Researchers have long proposed the use of ad hoc wireless networks for such scenarios. However, such networks have so far failed to create any impact, primarily because they are unable to handle network transience and have usability constraints such as static topologies and dependence on specific platforms. LifeNet is a WiFi-based ad hoc data communication solution designed for use in highly transient environments. After presenting the motivation, design principles, and key insights from prior literature, the dissertation introduces a new routing metric called Reachability and a new routing protocol based on it, called Flexible Routing. Roughly speaking, reachability measures the end-to-end multi-path probability that a packet transmitted by a source reaches its final destination. Using experimental results, it is shown that even with high transience, the reachability metric (1) accurately captures the effects of transience, (2) provides a compact and eventually consistent global network view at individual nodes, (3) is easy to calculate and maintain, and (4) captures availability. Flexible Routing trades throughput for availability and fault tolerance and ensures successful packet delivery under varying degrees of transience. With the intent of deploying LifeNet in the field, we have been continuously interacting with field partners, one of which is the Tata Institute of Social Sciences, India. We have iteratively refined LifeNet based on their feedback. I conclude the thesis with lessons learned from our field trips so far and deployment plans for the near future.

    M.S. Committee Chair: Santosh Vempala; Committee Members: Ashok Jhunjhunwala, Michael Best, Nick Feamster
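    Reachability as described here, the probability that at least one path delivers a packet when each link succeeds independently, can be estimated by sampling. This Monte Carlo sketch in Python is our illustration of the concept, not LifeNet's actual computation, which the abstract says must be compact enough to maintain at individual nodes:

        import random
        from collections import defaultdict

        def reachability(links, src, dst, trials=20000):
            """links: {(u, v): delivery probability}, undirected graph."""
            hits = 0
            for _ in range(trials):
                adj = defaultdict(list)
                for (u, v), p in links.items():
                    if random.random() < p:        # link up in this sample
                        adj[u].append(v)
                        adj[v].append(u)
                seen, stack = {src}, [src]         # traverse sampled graph
                while stack:
                    node = stack.pop()
                    for nb in adj[node]:
                        if nb not in seen:
                            seen.add(nb)
                            stack.append(nb)
                hits += dst in seen
            return hits / trials

        # Two disjoint two-hop paths with per-link probability 0.9:
        # exact value is 1 - (1 - 0.81)**2 = 0.9639.
        links = {("s", "a"): 0.9, ("a", "d"): 0.9,
                 ("s", "b"): 0.9, ("b", "d"): 0.9}
        print(round(reachability(links, "s", "d"), 3))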

    A Stochastic Model of Plausibility in Live-Virtual-Constructive Environments

    Distributed live-virtual-constructive simulation promises a number of benefits for the test and evaluation community, including reduced costs, access to simulations of limited-availability assets, the ability to conduct large-scale multi-service test events, and recapitalization of existing simulation investments. However, geographically distributed systems are subject to fundamental state consistency limitations that make assessing the data quality of live-virtual-constructive experiments difficult. This research presents a data quality model based on the notion of plausible interaction outcomes. This model explicitly accounts for the lack of absolute state consistency in distributed real-time systems and offers system designers a means of estimating data quality and fitness for purpose. Experiments with World of Warcraft player trace data validate the plausibility model and exceedance probability estimates. Additional experiments with synthetic data illustrate the model's use in ensuring fitness for purpose of live-virtual-constructive simulations and estimating the quality of data obtained from live-virtual-constructive experiments.
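    To make "exceedance probability" concrete: under network latency, a remote node sees each entity where it was one latency interval ago, so the position error is roughly speed times latency, and the exceedance probability is the chance that this error passes a plausibility threshold. The numbers, threshold, and uniform speed distribution below are illustrative assumptions, not the dissertation's data or model:

        import random

        random.seed(7)
        speeds = [random.uniform(0.0, 7.0) for _ in range(100_000)]  # m/s
        latency_s = 0.150        # assumed one-way network latency
        threshold_m = 0.5        # assumed plausibility threshold

        # Remote view lags by latency_s, so error ~ speed * latency.
        errors = [v * latency_s for v in speeds]
        exceedance = sum(e > threshold_m for e in errors) / len(errors)
        print(f"P(error > {threshold_m} m) ~ {exceedance:.3f}")   # ~0.524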