
    LIPIcs, Volume 251, ITCS 2023, Complete Volume


    Toward relevant answers to queries on incomplete databases

    Incomplete and uncertain information is ubiquitous in database management applications, yet the techniques specifically developed to handle incomplete data remain insufficient: even the evaluation of SQL queries on databases containing NULL values is still a challenge after 40 years. There is no consensus on what an answer to a query on an incomplete database should be, and the existing notions often have limited applicability. One of the most prevalent techniques in the literature is based on finding answers that are certainly true, independently of how missing values are interpreted. However, this notion has yielded several conflicting formal definitions of certain answers. Based on the observation that incomplete data can be enriched with additional knowledge, we designed a notion able to unify and explain the different definitions of certain answers. Moreover, this knowledge-preserving certain answers notion provides the first well-founded definition of certain answers for the relational bag data model and value-inventing queries, addressing some key limitations of previous approaches. It does not, however, offer any guarantee about the relevance of the answers it captures.

    To understand what relevant answers to queries on incomplete databases would be, we designed and conducted a survey on the everyday usage of NULL values among database users. One finding of this socio-technical study is that even when users agree on the possible interpretations of NULL values, they may not agree on what a satisfactory query answer is. To be relevant, query evaluation on incomplete databases must therefore account for users' tasks and preferences. We model users' preferences and tasks with the notion of regret: the regret function captures the task-dependent loss a user endures when they treat one database as ground truth instead of another. With this notion, we designed the first framework able to provide a score accounting for the risk associated with query answers, which allows us to define the risk-minimizing answers to queries on incomplete databases. We show that for some regret functions, regret-minimizing answers coincide with certain answers; moreover, as the notion is more flexible, it can capture more nuanced answers and more interpretations of incompleteness.

    A different approach to improving the relevance of an answer is to explain its provenance. We propose to partition the incompleteness into sources and measure their respective contributions to the risk of an answer. As a first milestone, we study several models to predict the evolution of the risk when a source of incompleteness is cleaned. We implemented the framework, and it exhibits promising results on relational databases and queries with aggregate and grouping operations: the model allows us to infer the risk reduction obtained by cleaning an attribute. Finally, by taking a game-theoretical approach, the model can provide an explanation for answers based on the contribution of each attribute to the risk.
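
    As a concrete illustration of the regret machinery, the following sketch (ours; the names and the worst-case aggregation are illustrative assumptions, not the thesis's definitions) represents an incomplete table by the set of its possible worlds, scores a candidate answer by its maximal regret over those worlds, and returns a risk-minimizing answer:

        from itertools import product

        # Illustrative sketch (not the thesis's API): an incomplete database
        # is represented by its possible worlds (all completions of the NULLs);
        # regret(a, b) is the task-dependent loss a user endures when acting
        # on answer a while b is the ground truth.

        def completions(table, domain):
            """Enumerate all possible worlds of a table, with None as NULL."""
            slots = [(i, j) for i, row in enumerate(table)
                            for j, v in enumerate(row) if v is None]
            for values in product(domain, repeat=len(slots)):
                world = [list(row) for row in table]
                for (i, j), v in zip(slots, values):
                    world[i][j] = v
                yield [tuple(row) for row in world]

        def risk(answer, worlds, query, regret):
            """Worst-case regret of an answer over all possible worlds
            (an expectation under a prior would work the same way)."""
            return max(regret(answer, query(w)) for w in worlds)

        def risk_minimizing_answer(candidates, worlds, query, regret):
            return min(candidates, key=lambda a: risk(a, worlds, query, regret))

        # Example: SELECT COUNT(*) WHERE value > 5, one NULL, symmetric loss.
        table = [(1, 7), (2, None)]
        worlds = list(completions(table, range(10)))
        query = lambda w: sum(1 for row in w if row[1] > 5)
        regret = lambda a, truth: abs(a - truth)   # task: numeric estimate
        print(risk_minimizing_answer(range(3), worlds, query, regret))

    Swapping the regret function or the aggregation (for instance, an expectation under a prior over worlds) yields the more nuanced notions the abstract alludes to.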

    First-Order Stable Model Semantics with Intensional Functions

    In classical logic, non-Boolean fluents, such as the location of an object, can be naturally described by functions. However, this is not the case in answer set programs, where the values of functions are pre-defined and the nonmonotonicity of the semantics is related to minimizing the extents of predicates but has nothing to do with functions. We extend the first-order stable model semantics by Ferraris, Lee, and Lifschitz to allow intensional functions -- functions that are specified by a logic program just like predicates are. We show that many known properties of the stable model semantics naturally extend to this formalism, and we compare it with other related approaches to incorporating intensional functions. Furthermore, we use this extension as a basis for defining Answer Set Programming Modulo Theories (ASPMT), analogous to the way Satisfiability Modulo Theories (SMT) is defined, allowing for SMT-like effective first-order reasoning in the context of ASP. Using SMT solving techniques involving functions, ASPMT can be applied to domains containing real numbers and alleviates the grounding problem. We show that other approaches to integrating ASP and CSP/SMT can be related to special cases of ASPMT in which functions are limited to non-intensional ones.
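
    To give a flavour of the formalism, the following rule is a hedged sketch (our rendering; the paper's exact syntax differs) of the commonsense law of inertia for a functional fluent loc, where the double negation makes the rule a default under the stable model semantics:

        loc(x, t+1) = y  ←  loc(x, t) = y  ∧  ¬¬ (loc(x, t+1) = y)

    Read: by default, an object stays where it is. Because loc is intensional, its values are determined by the program rather than fixed in advance; in ASPMT the value sort can even be the reals, with models found by SMT solving instead of grounding.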

    LIPIcs, Volume 261, ICALP 2023, Complete Volume


    Constructive approaches to Program Induction

    Search is a key technique in artificial intelligence, machine learning and Program Induction. No matter how efficient a search procedure is, there exist spaces that are too large to search effectively, and the space of programs is among them. In this dissertation we show that in the context of logic-program induction (Inductive Logic Programming, or ILP) it is not necessary to search for a correct program: if one exists, there also exists a unique object that is the most general correct program, and it can be constructed directly, without a search, in polynomial time and from a polynomial number of examples. The existence of this unique object, which we term the Top Program because of its maximal generality, does not so much solve the problem of searching a large program space as completely sidestep it, thus improving the efficiency of the learning task by orders of magnitude commensurate with the complexity of a program space search. The existence of a unique Top Program, and the ability to construct it with finite resources, relies on imposing on the language of hypotheses, from which programs are constructed, a strong inductive bias relevant to the learning task. In common practice, in machine learning, Program Induction and ILP, such relevant inductive bias is selected, or created, manually by the human user of a learning system, with intuition or knowledge of the problem domain, and in the form of various kinds of program templates. In this dissertation we show that by abandoning the reliance on such extra-logical devices as program templates, and instead defining inductive bias exclusively as First- and Higher-Order Logic formulae, it is possible to learn inductive bias itself from examples, automatically and efficiently, by Higher-Order Top Program construction.

    In Chapter 4 we describe the Top Program in the context of the Meta-Interpretive Learning (MIL) approach to ILP and describe an algorithm for its construction, the Top Program Construction algorithm (TPC). We prove the efficiency and accuracy of TPC and describe its implementation in a new MIL system called Louise. We support the theoretical results with experiments comparing Louise to the state-of-the-art, search-based MIL system Metagol, and find that Louise improves on Metagol's efficiency and accuracy. In Chapter 5 we re-frame MIL as specialisation of metarules, the Second-Order clauses used as inductive bias in MIL, and prove that problem-specific metarules can be derived by specialisation of maximally general metarules, by MIL. We describe a sub-system of Louise, called TOIL, that learns new metarules by MIL, and demonstrate empirically that the metarules learned by TOIL match those selected manually, while maintaining the accuracy and efficiency of learning.
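
    The following toy sketch (ours, heavily simplified; Louise actually operates on definite programs via a Prolog meta-interpreter) conveys the constructive idea: enumerate every instantiation of the metarules with background predicate symbols, keep the clauses that cover at least one positive example (generalisation) and no negative example (specialisation), and return their union as the Top Program, with no search through the space of clause combinations:

        # Toy Top Program Construction: one-clause hypotheses over binary
        # background relations, two metarules, no recursion.

        # Background knowledge: extensions of binary predicates.
        background = {
            "parent": {("alice", "bob"), ("bob", "carol")},
        }

        # Metarules as functions: clause template -> pairs derived for target.
        def identity(q):
            return background[q]

        def chain(q, r):
            return {(x, y) for (x, z1) in background[q]
                           for (z2, y) in background[r] if z1 == z2}

        def candidate_clauses():
            preds = list(background)
            for q in preds:
                yield (f"target(X,Y):- {q}(X,Y)", identity(q))
            for q in preds:
                for r in preds:
                    yield (f"target(X,Y):- {q}(X,Z),{r}(Z,Y)", chain(q, r))

        def top_program(pos, neg):
            # Generalise: keep clauses covering at least one positive example;
            # specialise: drop clauses covering any negative example.
            return [c for c, covered in candidate_clauses()
                    if covered & pos and not covered & neg]

        # Learn "grandparent" from one positive and one negative example.
        pos = {("alice", "carol")}
        neg = {("alice", "bob")}
        for clause in top_program(pos, neg):
            print(clause)   # target(X,Y):- parent(X,Z),parent(Z,Y)

    Because each candidate clause is checked independently, the construction is a single pass over the metarule instantiations rather than a search through their powerset.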

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers from the affiliated competition SV-COMP and 1 paper consisting of the competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.

    Computational and Experimental Study of the Primary Atomisation Process under Different Injection Conditions

    The primary atomisation process is the mechanism by which a liquid vein breaks into droplets in a gaseous ambient. This process is present in many engineering applications accomplishing different tasks. Sometimes it is a step prior to combustion, as in the energy or propulsion industries, where the objective is to extract the specific energy of the liquid. In other sectors, such as coating or fire extinction, the objective is to maximise the area covered by the droplet cloud. However, although atomisation is a fundamental part of several industrial processes, it is far from fully understood. The atomisation process is a mixture of gas-liquid interaction phenomena within a turbulent field that takes place in the near field, which is the densest region of the spray. When trying to shed light on the primary atomisation process, the main issue is the lack of definitive physical theories able to link the complex breakup events to the turbulence. The principal impediment to investigating the primary atomisation process is the inability of classic optical techniques to provide information from the dense region of the spray. Only in recent years have newer X-ray-based techniques been able to provide new information on spray characteristics near the nozzle outlet. This also affects computational primary atomisation models, which, lacking experimental information on the dense region, require an accurate calibration of their constants to provide reliable results in the far field.

    This thesis focuses on improving the knowledge of the primary atomisation process, especially on how the injection conditions affect the spray development in the near field, from two different standpoints: on the one hand, a computational approach using Direct Numerical Simulations; on the other, experiments using Near-Field Microscopy. The computational study focuses on varying the inflow Reynolds and Weber numbers. Results show that increasing the Reynolds number improves the liquid disintegration, exhibiting an increase in generated droplets and a finer droplet cloud. However, the lack of a fully developed turbulent inflow profile leads to unexpected behaviours in the breakup length of the liquid vein, which also increases with the Reynolds number. The number of droplets likewise increases with the Weber number, but the characteristic droplet sizes remain the same, and the breakup length does not vary, suggesting that surface tension variations affect droplet and ligament breakup but not the disintegration of the liquid core itself. With the results obtained from both studies, a phenomenological model is proposed that predicts the droplet size distribution as a function of the injection conditions. Additionally, the effect of elliptical nozzles is studied: the number of detected droplets increases compared with the round spray while similar spray opening angles are maintained. However, with extremely eccentric nozzles, the decrease in inflow turbulence counteracts the benefits of this type of injector.

    Regarding the experimental analysis, Near-Field Microscopy makes it possible to magnify the dense region and analyse the macroscopic features of the spray. The injection and discharge pressures are therefore varied, with the focus on the spray opening angle. The expected increase in the angle when raising both the injection and the discharge pressure is observed. Additionally, an analysis of the spray contour perturbations is performed, concluding that increasing the injection pressure, and thus the inflow turbulence, increases the perturbations on the spray contour, especially at lower discharge pressures.

    González Montero, LA. (2022). Computational and Experimental Study of the Primary Atomisation Process under Different Injection Conditions [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/19063
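
    For reference, the two dimensionless groups varied in the computational study can be computed as in this minimal sketch (liquid-based definitions assumed, which the thesis may not use; the property values are placeholders):

        # Injection Reynolds and Weber numbers, liquid-based definitions:
        #   Re = rho_l * u * d / mu_l      -- inertial vs. viscous forces
        #   We = rho_l * u**2 * d / sigma  -- inertial vs. surface tension forces

        def reynolds(rho_l, u, d, mu_l):
            """Injection Reynolds number (dimensionless)."""
            return rho_l * u * d / mu_l

        def weber(rho_l, u, d, sigma):
            """Injection Weber number (dimensionless)."""
            return rho_l * u ** 2 * d / sigma

        # Illustrative diesel-like conditions (placeholder values).
        rho_l, mu_l, sigma = 750.0, 2.0e-3, 0.02   # kg/m^3, Pa*s, N/m
        u, d = 100.0, 90e-6                        # m/s, m
        print(f"Re = {reynolds(rho_l, u, d, mu_l):.0f}")   # ~3.4e3
        print(f"We = {weber(rho_l, u, d, sigma):.0f}")     # ~3.4e4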

    Automated Reasoning

    This volume, LNAI 13385, constitutes the refereed proceedings of the 11th International Joint Conference on Automated Reasoning, IJCAR 2022, held in Haifa, Israel, in August 2022. The 32 full research papers and 9 short papers presented together with two invited talks were carefully reviewed and selected from 85 submissions. The papers focus on the following topics: Satisfiability, SMT Solving, Arithmetic; Calculi and Orderings; Knowledge Representation and Justification; Choices, Invariance, Substitutions and Formalization; Modal Logics; Proof Systems and Proof Search; Evolution, Termination and Decision Problems. This is an open access book.

    Technology and regulation 2021

    Technology and Regulation (TechReg) is an international journal of law, technology and society, with an interdisciplinary identity. TechReg provides an online platform for disseminating original research on the legal and regulatory challenges posed by existing and emerging technologies (and their applications) including, but by no means limited to, the Internet and digital technology, artificial intelligence and machine learning, robotics, neurotechnology, nanotechnology, biotechnology, energy and climate change technology, and health and food technology. This book contains Volume 3 (2021) of the journal.
