
    Integrity constraints in logic databases

    We consider logic databases as logic programs and suggest how to deal with the problem of integrity constraint checking. Two methods for integrity constraint handling are presented. The first one is based on a metalevel consistency proof and is particularly suitable for an existing database which has to be checked against some integrity constraints. The second method is based on a transformation of the logic program which represents the database into a logic program which satisfies the given integrity constraints. This method is especially suited to databases that have to be built by specifying, separately, the deductive rules and the facts on the one hand, and the integrity constraints on a specific relation on the other. Different tools providing for the two mechanisms are proposed for a flexible logic database management system.
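    As a hedged illustration of integrity constraint checking in a logic database in general (a minimal sketch written for this listing, not the paper's metalevel-proof or transformation method), constraints can be read as denials that must have no answer against the stored facts; all relation names and constraints below are invented.

        # Integrity checking over a small logic database: facts as tuples,
        # constraints as denial-style checks that must yield no violation.
        facts = {
            ("employee", "ann"),
            ("employee", "bob"),
            ("manager", "ann"),
            ("salary", "ann", 50000),
            ("salary", "bob", 40000),
        }

        def holds(*atom):
            """True if the given atom is a stored fact."""
            return atom in facts

        def violations():
            """Yield a message for every constraint whose denial succeeds."""
            # Constraint 1: every manager must also be an employee.
            for (_, who) in (f for f in facts if f[0] == "manager"):
                if not holds("employee", who):
                    yield f"manager {who} is not an employee"
            # Constraint 2: every employee must have a salary.
            for (_, who) in (f for f in facts if f[0] == "employee"):
                if not any(f[0] == "salary" and f[1] == who for f in facts):
                    yield f"employee {who} has no salary"

        print(list(violations()))   # [] means the constraints are satisfied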

    Restriccions d'integritat temporals en bases de dades deductives bitemporals

    The aim of this report is to introduce a taxonomy of temporal integrity constraints, focused on the area of bitemporal deductive databases, in order to get a better understanding of why such constraints are required, how they behave and the best way to define them using first-order logic. To meet these goals, we have analysed existing taxonomies of temporal integrity constraints in the temporal database area and in closely related areas such as multiversion databases. This prior work has then been adapted and extended to cover the scope of bitemporal deductive databases.
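    As a hedged illustration, not taken from the report: over an assumed bitemporal relation salary(Emp, Amount, VS, VE, TS, TE), where [VS, VE] is the valid-time interval and [TS, TE] the transaction-time interval, a simple temporal integrity constraint written in first-order logic could require every stored version to have well-formed intervals:

        ∀e ∀a ∀vs ∀ve ∀ts ∀te ( salary(e, a, vs, ve, ts, te) → vs ≤ ve ∧ ts ≤ te )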

    A three-valued logic for Inductive Logic Programming

    Inductive Logic Programming (ILP) is closely related to Logic Programming (LP) in name. We extract the basic differences between ILP and LP by comparing the two, and we give definitions of the basic assumptions of their paradigms, e.g. the closed world assumption, the open domain assumption and the open world assumption used in ILP. The paper is written in English.
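    A minimal sketch of the contrast between the closed and open world assumptions mentioned above, over an invented ground fact base: under the closed world assumption a fact that is not derivable is taken to be false, while under the open world assumption it is simply unknown, which is where a third truth value becomes useful.

        # Closed vs. open world assumption over a tiny fact base (illustrative only).
        facts = {("bird", "tweety"), ("penguin", "pingu")}

        def query_cwa(atom):
            """Closed world: absence from the database means falsity."""
            return atom in facts            # True or False

        def query_owa(atom):
            """Open world: absence from the database means 'unknown'."""
            return True if atom in facts else None   # None plays the role of the third value

        print(query_cwa(("bird", "pingu")))   # False
        print(query_owa(("bird", "pingu")))   # None (unknown)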

    Perspectives in deductive databases

    I discuss my experiences, some of the work that I have done, and related work that influenced me, concerning deductive databases, over the last 30 years. I divide this time period into three roughly equal parts: 1957–1968, 1969–1978, 1979–present. For the first I describe how my interest in deductive databases started in 1957, at a time when the field of databases did not even exist. I describe work in the beginning years, leading to the start of deductive databases about 1968 with the work of Cordell Green and Bertram Raphael. The second period saw a great deal of work in theorem proving as well as the introduction of logic programming. The existence and importance of deductive databases as a formal and viable discipline received its impetus at a workshop held in Toulouse, France, in 1977, which culminated in the book Logic and Data Bases. The relationship of deductive databases and logic programming was recognized at that time. During the third period we have seen formal theories of databases come about as an outgrowth of that work, and the recognition that artificial intelligence and deductive databases are closely related, at least through the so-called expert database systems. I expect that the relationships between techniques from formal logic, databases, logic programming, and artificial intelligence will continue to be explored and that the field of deductive databases will become a more prominent area of computer science in coming years.

    On Softening OCL Invariants

    Invariants play a crucial role in system development. This contribution focuses on invariants in systems with so-called occurrence uncertainty, where we are interested in deciding whether a certain population (a set of instances of a class model) of the system satisfies an invariant or not, but we are unsure about the actual occurrence of the elements of that population, and also about the degree of satisfaction that is actually required for the invariant to be fulfilled. Invariants are soft in the sense that they are required to hold only for a particular, and a priori uncertain, percentage of the population. The contribution proposes a systematic approach to occurrence uncertainty and a prototypical implementation for models with uncertainty and soft invariants that allows system states to be built and experimented with. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech. Projects TIN2014-52034-R and PGC2018-094905-B-I00.
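    A minimal sketch of the idea of a soft invariant under occurrence uncertainty, not the paper's prototype: each instance carries an assumed probability of actually occurring, and the invariant counts as fulfilled if the expected share of occurring instances that satisfy it reaches the required percentage. The class, attribute and aggregation rule below are assumptions made for illustration only.

        # Soft invariant check under occurrence uncertainty (illustrative sketch).
        from dataclasses import dataclass

        @dataclass
        class Account:
            balance: float
            occurrence: float   # probability that this instance actually occurs

        def soft_invariant_holds(population, predicate, required_share):
            """Does the expected share of occurring instances satisfying the predicate reach the threshold?"""
            expected_occurring = sum(a.occurrence for a in population)
            if expected_occurring == 0:
                return True     # no instance is expected to occur: vacuously satisfied
            expected_satisfying = sum(a.occurrence for a in population if predicate(a))
            return expected_satisfying / expected_occurring >= required_share

        accounts = [Account(100.0, 0.9), Account(-20.0, 0.4), Account(35.0, 1.0)]
        # Invariant "balance >= 0", required to hold for at least 80% of the population.
        print(soft_invariant_holds(accounts, lambda a: a.balance >= 0, 0.80))   # True (about 0.83 >= 0.80)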

    Actualització consistent de bases de dades deductives

    In this thesis, we propose a new method for the consistent updating of deductive databases. Given an update request, the method automatically translates it into the set of all possible ways of updating the extensional database so that the request is satisfied and no integrity constraint is violated. The method is based on a set of rules that define the difference between two consecutive database states; this difference is determined by explicitly defining the insertions, deletions and modifications that can be induced as a consequence of applying an update to the database. The method builds on an extension of the SLDNF resolution procedure. Let D be a deductive database, A(D) its associated augmented database, U an initial update request and T a set of updates of base facts. The set T satisfies the update request U and violates no integrity constraint of D if, using SLDNF resolution, the goal ← U ∧ ¬Ic succeeds with input set A(D) ∪ T. The method therefore consists in making the failed SLDNF derivations succeed: the updates of base facts needed for a derivation to succeed are added to the set T, and the different ways in which success can be reached correspond to the different solutions to the update request U. The method is shown to be sound and complete, in the sense that, given an update request U, it obtains all possible ways of satisfying the request while also satisfying the integrity constraints defined on the database. Unlike other methods, ours handles modifications of facts as a new basic update type; this new update type, together with the proof of soundness and completeness, is one of the main contributions of our method with respect to recently published methods. The second main contribution is the use of techniques to improve the efficiency of the view update translation process and of the integrity constraint maintenance process. To improve the efficiency of integrity constraint maintenance, we propose a technique for determining the order in which integrity constraints should be checked, based on the compile-time generation of the so-called Precedence Graph, which establishes the relationships between potential violators and potential repairs of these constraints. This graph is used at run time to determine the order in which integrity constraints are checked and repaired, and this order reduces the number of times each constraint has to be checked (and repaired) after repairing any other constraint. To improve the efficiency of view updating, we propose to analyse the update request, the database contents and the rules of the augmented database before starting the translation of the update request U. The goal of this analysis is to minimise the number of accesses to the database contents needed to translate the update request and, at the same time, to determine which alternatives cannot lead to a valid translation of U, so that only those alternatives that will provide a valid translation of U are considered.
    Deductive databases generalize relational databases by including not only base facts and integrity constraints, but also deductive rules. Several problems may arise when a deductive database is updated. The problems addressed in this thesis are integrity maintenance and view updating. Integrity maintenance aims to ensure that, after a database update, integrity constraints remain satisfied; when these constraints are violated by some update, the violations must be repaired by performing additional updates. The second problem is view updating: in a deductive database, derived facts are not explicitly stored but are deduced from base facts using deductive rules, so requests to update view (or derived) facts must be appropriately translated into correct updates of the underlying base facts. There is a close relationship between updating a deductive database and maintaining integrity constraints because, in general, integrity constraints can only be violated when performing an update. For instance, updates of base facts obtained as a result of view updating could violate some integrity constraint; on the other hand, repairing an integrity constraint may require solving the view update problem, since integrity constraints may be defined in terms of derived predicates. In this thesis, we propose a method that deals satisfactorily and efficiently with both problems in an integrated way. Given an update request, our method automatically translates it into all possible ways of changing the extensional database such that the update request is satisfied and no integrity constraint is violated. Concretely, we formally define the proposed method and prove its soundness and completeness: the method provides all possible ways to satisfy an update request, and each provided solution satisfies the update request and does not violate any integrity constraint. Moreover, to show how our method extends previous work in the area, we propose a general framework that allows us to classify and compare previous research in the field of view updating and integrity constraint maintenance. This framework takes into account five relevant dimensions of the process: the kind of update requests, the database schema considered, the problem addressed, the solutions obtained and the technique used to obtain these solutions. Efficiency issues are also addressed in our approach, both for integrity maintenance and for view updating. To perform integrity maintenance efficiently, we propose a technique for determining the order in which integrity constraints should be handled. This technique is based on the compile-time generation of a graph, the Precedence Graph, which states the relationships between potential violations and potential repairs of integrity constraints. The graph is used at run time to determine the proper order in which to check and repair integrity constraints; this order significantly reduces the number of times each integrity constraint needs to be reconsidered after any integrity constraint repair. To improve efficiency during view updating, we propose to perform an initial analysis of the update request, the database contents and the rules of the database. The purpose of this analysis is to minimize the number of accesses to base facts needed to translate a view update request and to explore only the relevant alternatives that may lead to valid solutions of the update request. Furthermore, a detailed comparison with other methods for integrity maintenance that consider efficiency issues is provided, showing several contributions of our approach.
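    A minimal sketch, under strong simplifying assumptions and not taken from the thesis, of how a Precedence Graph can drive the order of integrity constraint handling: each edge records that repairing one constraint may violate another, and a topological order over an acyclic graph handles every potential violator before the constraints its repair can affect. Constraint names are invented.

        # Ordering integrity constraint checks from a precedence graph (illustrative sketch).
        # The dict maps each constraint to the set of constraints whose repair may
        # violate it, so those must be checked (and repaired) first.
        from graphlib import TopologicalSorter

        precedence = {
            "ic_key":         set(),
            "ic_referential": {"ic_key"},          # repairing ic_key may violate ic_referential
            "ic_balance":     {"ic_referential"},  # repairing ic_referential may violate ic_balance
        }

        order = list(TopologicalSorter(precedence).static_order())
        print(order)   # ['ic_key', 'ic_referential', 'ic_balance']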

    Fractals for Secondary Key Retrieval

    In this paper we propose the use of fractals, and especially the Hilbert curve, in order to design good distance-preserving mappings. Such mappings improve the performance of secondary-key and spatial access methods, where multi-dimensional points have to be stored on a 1-dimensional medium (e.g., disk). Good clustering reduces the number of disk accesses on retrieval, improving the response time. Our experiments on range queries and nearest neighbor queries showed that the proposed Hilbert curve achieves better clustering than older methods ("bit-shuffling", or the Peano curve) in every situation we tried. (Also cross-referenced as UMIACS-TR-89-47.)
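    The distance-preserving mapping in question assigns each grid point a position along the Hilbert curve. A short sketch of the standard 2-D point-to-index conversion, written from general knowledge of the curve rather than taken from the paper:

        # Map a point (x, y) on a 2**order x 2**order grid to its distance d
        # along the Hilbert curve (classic rotate-and-accumulate formulation).
        def hilbert_d(order, x, y):
            n = 2 ** order
            d = 0
            s = n // 2
            while s > 0:
                rx = 1 if (x & s) > 0 else 0
                ry = 1 if (y & s) > 0 else 0
                d += s * s * ((3 * rx) ^ ry)
                # Rotate/reflect so the sub-curve in this quadrant is oriented correctly.
                if ry == 0:
                    if rx == 1:
                        x = n - 1 - x
                        y = n - 1 - y
                    x, y = y, x
                s //= 2
            return d

        # Nearby points often (not always) get nearby curve positions, which is the
        # clustering property that reduces disk accesses.
        print([hilbert_d(2, x, 0) for x in range(4)])   # [0, 1, 14, 15]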

    Disjunctively incomplete information in relational databases: modeling and related issues

    In this dissertation, the issues related to information incompleteness in relational databases are explored. In general, the dissertation can be divided into two parts. The first part extends the relational natural join operator and the update operations of insertion and deletion to I-tables, an extended relational model representing inclusively indefinite and maybe information, in a semantically correct manner. Rudimentary or naive algorithms for computing natural joins on I-tables require an exponential number of pair-up operations and a number of block accesses proportional to the size of the I-tables, due to the combinatorial nature of natural joins on I-tables; thus, the problem becomes intractable for large I-tables. An algorithm for computing natural joins under the extended model is proposed in this dissertation which reduces the number of pair-up operations to a linear order of complexity in general, and in the worst case to a polynomial order of complexity, with respect to the size of the I-tables. In addition, this algorithm also reduces the number of block accesses to a linear order of complexity with respect to the size of the I-tables.
    The second part is related to the modeling aspect of incomplete databases. An extended relational model, called E-table, is proposed. The E-table is capable of representing exclusively disjunctive information, that is, disjunctions of the form P_1 ∥ P_2 ∥ ... ∥ P_n, where ∥ denotes a generalized logical exclusive or indicating that exactly one of the P_i's can be true. The information content of an E-table is precisely defined, and the relational operators of selection, projection, difference, union, intersection and Cartesian product are extended to E-tables in a semantically correct manner. Conditions under which redundancies could arise due to the presence of exclusively disjunctive information are characterized, and a procedure for resolving these redundancies is presented.
    Finally, the dissertation concludes with a discussion of directions for further research in the area of incomplete information modeling. In particular, a sketch of a relational model, the IE-table (Inclusive and Exclusive table), for representing both inclusively and exclusively disjunctive information is provided.
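    To ground the operator being extended, a minimal sketch of an ordinary relational natural join over complete relations (attribute names invented); with I-tables, each row may stand for several alternative tuples, so a naive join must also pair up alternatives, which is the source of the combinatorial cost discussed above.

        # Ordinary natural join of two relations given as lists of dicts (baseline sketch).
        def natural_join(r, s):
            result = []
            for t1 in r:
                for t2 in s:
                    common = set(t1) & set(t2)            # shared attributes
                    if all(t1[a] == t2[a] for a in common):
                        result.append({**t1, **t2})
            return result

        employees = [{"emp": "ann", "dept": "d1"}, {"emp": "bob", "dept": "d2"}]
        departments = [{"dept": "d1", "city": "Oslo"}]
        print(natural_join(employees, departments))
        # [{'emp': 'ann', 'dept': 'd1', 'city': 'Oslo'}]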

    Policiamento preditivo : pressupostos para a implantação de um sistema de gestão de recursos operacionais da Polícia Militar

    In Brazil, public security continues to be one of the most worrying and complex issues, demonstrating that, despite the evolution of society, there is still a long way to go towards predictive rather than reactive security. Predictive policing is a practical public security tool that uses time series statistical data to create algorithms and prediction models that support the generation of public security strategies and the allocation of police resources. In this sense, this study aimed to build a prediction model (algorithm) of police occurrences in the city of Porto Alegre, based on data from a 2014-2018 time series. A conceptual design was developed around the Military Police Battalions (BPM), through the analysis of their structure and functioning. The research design comprised data analysis and the choice of variables relevant to the local scenario, going through internal validation and extensive treatment of the database created from data received from the State Public Security agency, data from the BPMs, the SIGA Methodology and Observa POA, and, finally, the proposal of the algorithm. The Capital Policing Command, composed of six BPMs, was presented, together with the characteristics of the recorded events, highlighting criminal and non-criminal occurrences and descriptive statistics about the sociological characteristics of the events. Based on these results and an extensive review of the predictive policing literature, the following attributes were chosen for the construction of the model: response time to calls, equitable workload distribution, and geometry adequacy. From the resulting database, a predictive algorithm was created and validated, and maps were prepared for the graphical visualization of the algorithms. The graphical representation helps in understanding the information and analyses of predictive policing and provides input for the implementation of an operational resource management system for the Military Police.