8 research outputs found

    Inconsistency-tolerant business rules in distributed information systems

    Full text link
    The final publication is available at Springer via https://doi.org/10.1007/978-3-642-41033-8_41. Business rules enhance the integrity of information systems. However, their maintenance does not scale up easily to distributed systems with concurrent transactions. To a large extent, that is due to two problematic exigencies: the postulates of total and isolated business rule satisfaction. To overcome these problems, we outline a measure-based, inconsistency-tolerant approach to business rule maintenance. Supported by ERDF/FEDER and MEC grants TIN2009-14460-C03, TIN2010-17139 and TIN2012-37719-C03-01. Decker, H.; Muñoz Escoí, F.D. (2013). Inconsistency-tolerant business rules in distributed information systems. In: On the Move to Meaningful Internet Systems: OTM 2013 Workshops. Springer Verlag (Germany). 8186:322-331. https://doi.org/10.1007/978-3-642-41033-8_41
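    The measure-based idea summarized above can be pictured with a small, purely illustrative Python sketch (the rule, data layout and helper names are assumptions made for this example, not the paper's formalism): an update is accepted as long as it does not increase the number of business-rule violations already present, so legacy inconsistency is tolerated without being allowed to grow.

        # Illustrative sketch, not the paper's method: tolerate pre-existing
        # violations of a business rule, but reject updates that add new ones.
        def count_violations(orders, customers):
            """Inconsistency measure: orders that reference an unknown customer."""
            known = {c["id"] for c in customers}
            return sum(1 for o in orders if o["customer_id"] not in known)

        def apply_if_tolerable(orders, customers, new_order):
            """Accept the update only if the measured inconsistency does not grow."""
            before = count_violations(orders, customers)
            candidate = orders + [new_order]
            if count_violations(candidate, customers) <= before:
                return candidate, True       # inconsistency did not grow: commit
            return orders, False             # update would add violations: reject

        customers = [{"id": 1}, {"id": 2}]
        orders = [{"id": 10, "customer_id": 1},
                  {"id": 11, "customer_id": 99}]   # legacy violation, tolerated

        orders, ok = apply_if_tolerable(orders, customers, {"id": 12, "customer_id": 2})
        print(ok)    # True  (adds no new violation)
        orders, ok = apply_if_tolerable(orders, customers, {"id": 13, "customer_id": 77})
        print(ok)    # False (would introduce a second dangling reference)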

    ¿Qué características tienen los esquemas NOSQL? (What characteristics do NoSQL schemas have?)

    Get PDF
    ABSTRACT: Many different types of databases exist today, among them relational databases, and numerous architectures have been designed for handling different kinds of data. Since 1970 the relational model has been adopted in almost all databases; with the start of a new era, developers noticed that their data differed from the structure of the common relational model and implemented new ideas and architectures so that those data would not be constrained by the relational model. This work aims to present the main characteristics of NoSQL database systems, to discuss data representation models, and to establish their advantages and disadvantages compared with other models.
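    As a rough illustration of the difference in data representation discussed in the abstract, the hypothetical snippet below contrasts flat relational-style rows linked by a foreign key with a nested, schema-flexible document of the kind used by many NoSQL stores (all field names are invented for the example).

        # Relational-style representation: fixed columns, relationships via foreign keys.
        customer_row = {"id": 1, "name": "Ana", "city": "Valencia"}
        order_rows = [{"id": 10, "customer_id": 1, "total": 25.0},
                      {"id": 11, "customer_id": 1, "total": 40.0}]

        # Document-style (NoSQL) representation: one nested document per aggregate;
        # individual documents may carry extra fields without any schema change.
        customer_doc = {
            "_id": 1,
            "name": "Ana",
            "city": "Valencia",
            "orders": [{"id": 10, "total": 25.0},
                       {"id": 11, "total": 40.0, "coupon": "SPRING"}],
        }

        # "All orders of customer 1" needs a join-like pass over the rows,
        # but only a single key lookup on the document.
        joined = [o for o in order_rows if o["customer_id"] == customer_row["id"]]
        embedded = customer_doc["orders"]
        print(len(joined), len(embedded))   # 2 2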

    PERFORMANCE EVALUATION OF M-PDDRA ALGORITHM UNDER PRIORITIZED TRAFFIC

    Get PDF
    Database replication is a process that keeps multiple copies of the same data at different geographical locations, thus providing data security and reducing load and access delay. In the recent past, PDDRA, a pre-fetching-based dynamic data replication algorithm, was proposed, and further modifications were later made to increase throughput and reduce latency. In previous work, the performance of the algorithms was measured for a random traffic arrival process for both local and global networks, and it was also evaluated under load-balancing conditions. However, an important feature of request generation is the priority of the request, which needs to be addressed. This work considers the priority of generated requests in the request loss analysis, thereby lifting the assumption of the previous algorithm that all members of the virtual organization have similar interests. The performance of the algorithm is evaluated using Monte-Carlo simulation.
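    The abstract does not give the simulation details, but the general shape of such a Monte-Carlo experiment can be sketched as follows: requests arrive with a priority class, a finite capacity serves higher priorities first, and per-class loss probabilities are estimated over many random trials. The parameters and drop policy below are illustrative assumptions, not the M-PDDRA algorithm itself.

        # Hedged sketch of a prioritized request-loss Monte-Carlo experiment.
        import random

        def simulate(trials=10_000, arrivals=20, capacity=12, classes=(0, 1, 2)):
            offered = {c: 0 for c in classes}
            lost = {c: 0 for c in classes}
            for _ in range(trials):
                batch = [random.choice(classes) for _ in range(arrivals)]
                for c in batch:
                    offered[c] += 1
                batch.sort()                     # class 0 = highest priority, served first
                for c in batch[capacity:]:       # overflow beyond capacity is lost
                    lost[c] += 1
            return {c: lost[c] / offered[c] for c in classes}

        random.seed(42)
        for cls, p in simulate().items():
            print(f"priority class {cls}: estimated loss probability {p:.3f}")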

    NewSQL-tietokannat (NewSQL databases)

    Get PDF
    Abstract. This bachelor's thesis deals with NewSQL databases. It examines how NewSQL databases are able to guarantee ACID transactions, which is determined by studying different NewSQL databases. I chose this topic out of personal interest; in addition, the topic is recent and potentially significant in the future. The amount of data being processed is growing rapidly, so companies need reliable solutions for storing and managing large amounts of data. Machine learning in particular requires processing large volumes of data, so at present it is important to ensure the reliability of NewSQL databases. The research question is approached through different NewSQL database solutions, such as VoltDB and MemSQL, whose approaches to guaranteeing ACID transactions are outlined in this thesis. The development of NewSQL databases has been a continuum since the 1960s, starting from relational databases. Because of the growth in the amount of data being processed, databases have had to be scaled out and distributed, which has created challenges for database reliability. For this reason, the thesis also reviews the effects that database distribution has had on reliability.
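    One well-known design point among the systems mentioned (VoltDB in particular) is to execute all transactions of a data partition serially on a single thread, which yields isolation without row-level locking. The sketch below imitates that idea with a per-partition task queue; the class and function names are assumptions made for the illustration, not an actual VoltDB or MemSQL API, and durability is not modelled.

        # Illustrative sketch: serial, single-threaded execution per partition
        # makes each transaction trivially isolated, with no locks on the data.
        import queue
        import threading

        class Partition:
            def __init__(self):
                self.rows = {}                    # owned exclusively by one worker thread
                self.tasks = queue.Queue()
                threading.Thread(target=self._run, daemon=True).start()

            def _run(self):
                while True:
                    txn, done = self.tasks.get()
                    txn(self.rows)                # runs to completion, never interleaved
                    done.set()

            def execute(self, txn):
                done = threading.Event()
                self.tasks.put((txn, done))
                done.wait()                       # caller observes the transaction atomically

        def transfer(amount):
            def txn(rows):
                if rows.get("alice", 0) >= amount:
                    rows["alice"] -= amount
                    rows["bob"] = rows.get("bob", 0) + amount
            return txn

        p = Partition()
        p.execute(lambda rows: rows.update(alice=100, bob=0))
        for _ in range(5):
            p.execute(transfer(10))
        p.execute(lambda rows: print(rows))       # {'alice': 50, 'bob': 50}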

    Scalable Uncertainty-tolerant Business Rules

    Full text link
    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-07617-1_16. Business rules are of key importance for maintaining the correctness of business processes and the reliability of business data. When they take the form of integrity constraints, business rules can also help to contain the amount of uncertainty associated with business data and with decisions based on those data. However, business rule enforcement may not scale up easily to systems with concurrent transactions. To a large extent, the problem is due to two common exigencies: the postulates of total and of isolated business rule satisfaction. In order to limit the accumulation of business rule violations, and thus of uncertainty, we outline how a measure-based, uncertainty-tolerant approach to business rule maintenance scales up to concurrent transactions. The scale-up is achieved by refraining from the postulates of total and isolated business rule satisfaction. Supported by ERDF/FEDER and the MEC grant TIN2012-37719-C03-01. Cuzzocrea, A.; Decker, H.; Muñoz-Escoí, F.D. (2014). Scalable Uncertainty-tolerant Business Rules. In: Hybrid Artificial Intelligence Systems: 9th International Conference, HAIS 2014, Salamanca, Spain, June 11-13, 2014, Proceedings. Springer Verlag (Germany). 179-190. https://doi.org/10.1007/978-3-319-07617-1_16
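    A minimal sketch of why relaxing total and isolated satisfaction helps concurrency (with an invented rule and data model, not the paper's construction): each transaction checks only the rule instances touched by its own writes, so transactions on unrelated data proceed in parallel without a global lock or a full-database check.

        # Hedged sketch: per-item locks plus a local rule check on the updated data only.
        import threading

        stock = {"widget": 5, "gadget": 3}
        locks = {item: threading.Lock() for item in stock}    # no global lock

        def sell(item, qty):
            with locks[item]:                     # isolate only the data being written
                new_level = stock[item] - qty
                if new_level >= 0:                # local rule: this item's stock stays >= 0
                    stock[item] = new_level       # other items are not re-checked at all

        threads = [threading.Thread(target=sell, args=a)
                   for a in (("widget", 2), ("gadget", 1), ("widget", 9))]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(stock)    # {'widget': 3, 'gadget': 2}; the oversell of 9 widgets was rejected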

    Laajan mittakaavan Internet-sovelluksia varten kehitetyt hajautetut tietokannat (Distributed databases developed for large-scale Internet applications)

    Get PDF
    The services offered by large Internet companies such as Google and Amazon require processing and storing vast amounts of distributed data. The data must be highly available, and good performance is also required of the database system. To maintain performance, the system must scale so that more resources can be added when needed. In addition, the database structure must be flexible and easy to modify. Traditional relational databases, with their transactional correctness and isolation requirements, have been too restrictive for this purpose, so other alternatives have been developed to meet the requirements of these large-scale Internet applications. These systems have come to be called NoSQL database systems. NoSQL databases are often so specialized that the relational model and the full expressive power of the SQL query language are not needed or cannot be used. The data model of these databases is based on key-value pairs, where a stored value is identified by an indexed key. The database schema, in turn, is often very flexible, or the database may even be entirely schemaless. The available operations are accordingly often limited to reading and updating individual key-value pairs. For large-scale parallel computation on such data, the simple MapReduce programming paradigm has also been developed. Google and Amazon leverage the large-scale infrastructure they have built for these systems by also offering it as a platform for other companies' applications, as a NoSQL database service. This thesis aims to clarify the principles of storage and data processing in NoSQL database systems, their differences from relational database systems, and the kinds of use these new database systems are actually suited for. The thesis also presents the MapReduce programming paradigm, NoSQL as a database service, and some ways of classifying NoSQL database systems and their data models. The thesis is based mainly on previously published written material on the topic, such as journal and conference articles and books. The current stage of development of NoSQL database systems can be compared to the era before SQL. These systems form a very heterogeneous group, so classifying them is also difficult. NoSQL database systems lack the highly developed features of traditional relational database systems; most of those features must be implemented in application logic, so they remain the application programmer's responsibility. No single database system or tool is the best solution for every task. In each system it is sensible and efficient to process and store mainly a certain kind of application-domain data, and the appropriate database system or tool depends entirely on the requirements of the company and the application. A company should therefore assess the requirements of its application-domain data.
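    The MapReduce paradigm mentioned in the abstract can be summarized with a small pure-Python stand-in (not an actual MapReduce engine): a map function emits key-value pairs from each input record, the framework groups the pairs by key, and a reduce function folds each group into a result.

        # Minimal word-count-style MapReduce illustration.
        from collections import defaultdict

        def map_phase(record):
            for word in record.split():           # emit (key, value) pairs
                yield word.lower(), 1

        def reduce_phase(key, values):
            return key, sum(values)

        def map_reduce(records):
            groups = defaultdict(list)
            for record in records:                # map + shuffle: group values by key
                for key, value in map_phase(record):
                    groups[key].append(value)
            return dict(reduce_phase(k, v) for k, v in groups.items())

        logs = ["GET /index GET /about", "get /index"]
        print(map_reduce(logs))                   # {'get': 3, '/index': 2, '/about': 1}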

    Linear Scalability of Distributed Applications

    Get PDF
    The explosion of social applications such as Facebook, LinkedIn and Twitter, of electronic commerce with companies like Amazon.com and Ebay.com, and of Internet search has created the need for new technologies and appropriate systems to effectively manage a considerable amount of data and users. These applications must run continuously every day of the year and must be capable of surviving sudden and abrupt load increases as well as all kinds of software, hardware, human and organizational failures. Increasing (or decreasing) the allocated resources of a distributed application in an elastic and scalable manner, while satisfying requirements on availability and performance in a cost-effective way, is essential for commercial viability, but it poses great challenges in today's infrastructures. Indeed, Cloud Computing can provide resources on demand: it is now easy to start dozens of servers in parallel (computational resources) or to store a huge amount of data (storage resources), even for a very limited period, paying only for the resources consumed. However, these complex infrastructures consisting of heterogeneous and low-cost resources are failure-prone. Also, although cloud resources are deemed to be virtually unlimited, only adequate resource management and demand multiplexing can meet customer requirements and avoid performance deterioration. In this thesis, we deal with adaptive management of cloud resources under specific application requirements. First, in the intra-cloud environment, we address the problem of cloud storage resource management with availability guarantees and find the optimal resource allocation in a decentralized way by means of a virtual economy. Data replicas migrate, replicate or delete themselves according to their economic fitness. Our approach responds effectively to sudden load increases or failures and makes the best use of the geographical distance between nodes to improve application-specific data availability. We then propose a decentralized approach for adaptive management of computational resources for applications requiring high availability and performance guarantees under load spikes, sudden failures or cloud resource updates. Our approach involves a virtual economy among service components (similar to the one among data replicas) and an innovative cascading scheme for setting the performance goals of individual components so as to meet the overall application requirements. It meets application requirements with the minimum of resources, by allocating new ones or releasing redundant ones. Finally, as cloud storage vendors offer online services at different rates, which can vary widely due to second-degree price discrimination, we present an inter-cloud storage resource allocation method that aggregates resources from different storage vendors and provides the user with a system that guarantees the best rate for hosting and serving their data, while satisfying the user's requirements on availability, durability, latency, etc. Our system continuously optimizes the placement of data according to its type and usage pattern, and minimizes migration costs from one provider to another, thereby avoiding vendor lock-in.
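    The "virtual economy" for replicas described above can be pictured with a deliberately simplified sketch (the income/rent formula and the thresholds are invented for the illustration, not the thesis's actual model): each replica earns virtual income for the requests it serves, pays virtual rent for the storage it occupies, and its resulting balance drives the replicate / keep / delete decision.

        # Hedged sketch of economic fitness driving replica management decisions.
        def fitness(requests_served, price_per_request, storage_gb, rent_per_gb):
            return requests_served * price_per_request - storage_gb * rent_per_gb

        def decide(balance, replicate_above=50.0, delete_below=0.0):
            if balance > replicate_above:
                return "replicate"     # popular data: create another copy elsewhere
            if balance < delete_below:
                return "delete"        # unprofitable copy: remove it if redundancy allows
            return "keep"

        replicas = {
            "eu-node-1": fitness(requests_served=900, price_per_request=0.1,
                                 storage_gb=20, rent_per_gb=1.0),
            "us-node-2": fitness(requests_served=40, price_per_request=0.1,
                                 storage_gb=20, rent_per_gb=1.0),
        }
        for node, balance in replicas.items():
            print(node, round(balance, 1), "->", decide(balance))
        # eu-node-1 70.0 -> replicate
        # us-node-2 -16.0 -> delete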

    In search of database consistency

    No full text