2,580 research outputs found

    Dimensional Inconsistency Measures and Postulates in Spatio-Temporal Databases

    The problem of managing spatio-temporal data arises in many applications, such as location-based services, environmental monitoring, geographic information systems, and many others. Often the spatio-temporal data arising from such applications turn out to be inconsistent, i.e., they represent an impossible situation in the real world. Though several inconsistency measures have been proposed to quantify, in a principled way, the inconsistency of propositional knowledge bases, little effort has been devoted so far to inconsistency measures tailored to the spatio-temporal setting. In this paper, we define and investigate new measures that are particularly suitable for dealing with inconsistent spatio-temporal information, because they explicitly take into account the spatial and temporal dimensions, as well as the dimension concerning the identifiers of the monitored objects. Specifically, we first define natural measures that look at individual dimensions (time, space, and objects), and then propose measures based on the notion of a repair. We then analyze their behavior w.r.t. common postulates defined for classical propositional knowledge bases, and find that the latter are not suitable for spatio-temporal databases, in that the proposed inconsistency measures often do not satisfy them. In light of this, we argue that postulates, too, should explicitly take into account the spatial, temporal, and object dimensions, and we thus define "dimension-aware" counterparts of the common postulates, which are indeed often satisfied by the new inconsistency measures. Finally, we study the complexity of the proposed inconsistency measures.
    Affiliation: Grant, John. Towson University; United States
    Affiliation: Martinez, Maria Vanina. Consejo Nacional de Investigaciones Científicas y Técnicas. Oficina de Coordinación Administrativa Ciudad Universitaria. Instituto de Investigación en Ciencias de la Computación. Universidad de Buenos Aires. Facultad de Ciencias Exactas y Naturales. Instituto de Investigación en Ciencias de la Computación; Argentina
    Affiliation: Molinaro, Cristian. Università della Calabria; Italy
    Affiliation: Parisi, Francesco. Università della Calabria; Italy
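
    As a rough illustration of the dimension-oriented measures described above, the following Python sketch counts how many objects and time points are involved in conflicts of a toy spatio-temporal database. The schema, the conflict notion, and the repair bound are illustrative assumptions, not the paper's formal definitions.

```python
from itertools import combinations

# Toy spatio-temporal database: each atom says object `obj` is inside region
# `region` (an axis-aligned box (x1, y1, x2, y2)) at time `t`. The schema and
# the conflict notion are illustrative assumptions, not the paper's definitions.
atoms = [
    ("car1", 3, (0, 0, 2, 2)),
    ("car1", 3, (5, 5, 7, 7)),   # disjoint from the box above -> conflict
    ("car2", 4, (1, 1, 3, 3)),
    ("car2", 4, (2, 2, 4, 4)),   # overlapping boxes -> no conflict
]

def disjoint(a, b):
    """True if two axis-aligned boxes share no point."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax2 < bx1 or bx2 < ax1 or ay2 < by1 or by2 < ay1

# Two atoms conflict when they place the same object at the same time
# in disjoint regions (it cannot be in two places at once).
conflicts = [
    (p, q)
    for p, q in combinations(atoms, 2)
    if p[0] == q[0] and p[1] == q[1] and disjoint(p[2], q[2])
]

# Dimension-oriented measures: how many objects / time points are touched
# by some conflict (one simple reading of "dimension-aware").
print("object-dimension measure:", len({p[0] for p, _ in conflicts}))  # 1
print("time-dimension measure:  ", len({p[1] for p, _ in conflicts}))  # 1

# Repair-flavoured measure: number of atoms to delete to remove all
# conflicts; len(conflicts) is a trivial upper bound for this tiny example.
print("repair-based measure (upper bound):", len(conflicts))           # 1
```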

    The Basic Principles of Uncertain Information Fusion. An organized review of merging rules in different representation frameworks

    We propose and advocate basic principles for the fusion of incomplete or uncertain information items that should apply regardless of the formalism adopted for representing pieces of information coming from several sources. This formalism can be based on sets, logic, partial orders, possibility theory, belief functions, or imprecise probabilities. We propose a general notion of an information item representing incomplete or uncertain information about the values of an entity of interest. It is supposed to rank such values in terms of relative plausibility and to explicitly point out impossible values. Basic issues affecting the results of the fusion process, such as the relative information content and consistency of information items, as well as their mutual consistency, are discussed. For each representation setting, we present fusion rules that obey our principles and compare them to representation-specific postulates proposed in the past. In the crudest (Boolean) representation setting, using a set of possible values, we show that whether the set is understood as the most plausible values or as the non-impossible ones matters for choosing a relevant fusion rule. In particular, in the latter case our principles justify the method of maximal consistent subsets, while the former is related to the fusion of logical bases. We then consider several formal settings for incomplete or uncertain information items in which our postulates are instantiated: plausibility orderings, qualitative and quantitative possibility distributions, belief functions, and convex sets of probabilities. The aim of this paper is to provide a unified picture of fusion rules across these various uncertainty representation settings.
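
    The method of maximal consistent subsets mentioned above can be conveyed with a small brute-force sketch: each source reports a set of non-impossible values, and fusion returns the union of the intersections of the maximal groups of mutually consistent sources. The example data and names below are invented for illustration, not taken from the paper.

```python
from itertools import combinations

# Each source reports the set of values it considers not impossible.
sources = {
    "s1": {1, 2, 3},
    "s2": {2, 3, 4},
    "s3": {7, 8},
}

def consistent(group):
    """A group of sources is consistent if their sets share a common value."""
    return bool(set.intersection(*(sources[s] for s in group)))

# Enumerate all consistent groups, then keep only the maximal ones
# (those not strictly contained in a larger consistent group).
names = list(sources)
consistent_groups = [
    frozenset(g)
    for r in range(1, len(names) + 1)
    for g in combinations(names, r)
    if consistent(g)
]
maximal = [g for g in consistent_groups
           if not any(g < h for h in consistent_groups)]

# Fusion by maximal consistent subsets: union of the intersections
# of each maximal group.
fused = set().union(
    *(set.intersection(*(sources[s] for s in g)) for g in maximal)
)
print(sorted(map(sorted, maximal)))  # [['s1', 's2'], ['s3']]
print(fused)                         # {2, 3, 7, 8}
```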

    Personalizable Knowledge Integration

    Large repositories of data are used daily as knowledge bases (KBs) feeding computer systems that support decision-making processes, such as in medical or financial applications. Unfortunately, the larger a KB is, the harder it is to ensure its consistency and completeness. The problem of handling KBs of this kind has been studied in the AI and databases communities, but most approaches focus on computing answers locally to the KB, assuming there is some single, epistemically correct solution. It is important to recognize that for some applications, as part of the decision-making process, users consider far more knowledge than that which is contained in the knowledge base, and that sometimes inconsistent data may help in directing reasoning; for instance, inconsistency in taxpayer records can serve as evidence of possible fraud. Thus, the handling of this type of data needs to be context-sensitive, creating a synergy with the user in order to build useful, flexible data management systems. Inconsistent and incomplete information is ubiquitous and presents a substantial problem when trying to reason about the data: how can we derive an adequate model of the world, from the point of view of a given user, from a KB that may be inconsistent or incomplete? In this thesis we argue that in many cases users need to bring their application-specific knowledge to bear in order to inform the data management process. Therefore, we provide different approaches to handle, in a personalized fashion, some of the most common issues that arise in knowledge management. Specifically, we focus on (1) inconsistency management in relational databases, general knowledge bases, and a special kind of knowledge base designed for news reports; (2) management of incomplete information in the form of different types of null values; and (3) answering queries in the presence of uncertain schema matchings. We allow users to define policies to manage both inconsistent and incomplete information in their applications, in a way that takes into account both the user's knowledge of the problem and their attitude to error/risk. Using the frameworks and tools proposed here, users can specify when and how they want to manage or resolve the issues that arise due to inconsistency and incompleteness in their data, in the way that best suits their needs.
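
    As a hypothetical illustration of user-defined inconsistency-management policies, the sketch below resolves key-conflicting tuples either by trusting the most recently reported value or by keeping the conflict visible for review. The schema and policy names are invented for the example and do not come from the thesis.

```python
from collections import defaultdict

# Toy taxpayer relation keyed by tax_id; two tuples disagree on income.
rows = [
    {"tax_id": "A1", "income": 50_000, "reported": 2021},
    {"tax_id": "A1", "income": 90_000, "reported": 2022},  # conflict on A1
    {"tax_id": "B2", "income": 40_000, "reported": 2022},
]

def resolve(rows, policy):
    """Apply a user-chosen policy to each group of key-equal tuples."""
    groups = defaultdict(list)
    for r in rows:
        groups[r["tax_id"]].append(r)
    result = []
    for key, group in groups.items():
        if len(group) == 1:
            result.append(group[0])
        elif policy == "latest-wins":
            # Trust the most recently reported value.
            result.append(max(group, key=lambda r: r["reported"]))
        elif policy == "flag":
            # Keep the conflict visible: it may itself be evidence (e.g. fraud).
            result.append({"tax_id": key, "income": None, "conflict": group})
    return result

print(resolve(rows, "latest-wins"))
print(resolve(rows, "flag"))
```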

    Language Models with Rationality

    While large language models (LLMs) are proficient at question-answering (QA), it is not always clear how (or even whether) an answer follows from their latent "beliefs". This lack of interpretability is a growing impediment to the widespread use of LLMs. To address this, our goals are to make model beliefs and their inferential relationships explicit, and to resolve inconsistencies that may exist, so that answers are supported by interpretable chains of reasoning drawn from a consistent network of beliefs. Our approach, which we call REFLEX, is to add a rational, self-reflecting layer on top of the LLM. First, given a question, we construct a belief graph using a backward-chaining process to materialize relevant model beliefs (including beliefs about answer candidates) and their inferential relationships. Second, we identify and minimize contradictions in that graph using a formal constraint reasoner. We find that REFLEX significantly improves consistency (by 8%-11% absolute) without harming overall answer accuracy, resulting in answers supported by faithful chains of reasoning drawn from a more consistent belief system. This suggests a new style of system architecture in which an LLM extended with a rational layer can provide an interpretable window into system beliefs, add a systematic reasoning capability, and repair latent inconsistencies present in the LLM.
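
    The following brute-force sketch conveys the flavor of resolving contradictions in a belief graph with a constraint reasoner: hard constraints encode implications and exclusions between statements, while a soft objective keeps the assignment as close as possible to the model's confidences. It illustrates only the general idea, not the REFLEX system; all statements and scores are made up.

```python
from itertools import product

# Toy "belief graph": model confidences for statements, implication edges
# (premises => conclusion), and an exclusion between contradictory statements.
confidence = {
    "bats are mammals": 0.9,
    "mammals do not lay eggs": 0.8,
    "bats lay eggs": 0.6,
    "bats do not lay eggs": 0.4,
}
implications = [
    (("bats are mammals", "mammals do not lay eggs"), "bats do not lay eggs"),
]
exclusions = [
    ("bats lay eggs", "bats do not lay eggs"),
]

def violates(assign):
    """Hard constraints: implications must hold; excluded pairs cannot both be true."""
    for premises, conclusion in implications:
        if all(assign[p] for p in premises) and not assign[conclusion]:
            return True
    return any(assign[a] and assign[b] for a, b in exclusions)

nodes = list(confidence)
best, best_score = None, float("-inf")
for values in product([True, False], repeat=len(nodes)):
    assign = dict(zip(nodes, values))
    if violates(assign):
        continue
    # Soft objective: stay as close as possible to the raw confidences.
    score = sum(c if assign[n] else 1 - c for n, c in confidence.items())
    if score > best_score:
        best, best_score = assign, score

# "bats lay eggs" (confidence 0.6) is flipped to False, because keeping it
# would contradict the better-supported chain of beliefs.
print(best)
```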

    A local basis approximation approach for nonlinear parametric model order reduction

    The efficient condition assessment of engineered systems requires the coupling of high-fidelity models with data extracted from the state of the system "as-is". To enable this task, this paper implements a parametric Model Order Reduction (pMOR) scheme for nonlinear structural dynamics, addressing in particular the case of material nonlinearity. A physics-based parametric representation is developed, incorporating dependencies on system properties and/or excitation characteristics. The pMOR formulation relies on a Proper Orthogonal Decomposition applied to a series of snapshots of the nonlinear dynamic response. A new approach to manifold interpolation is proposed, with interpolation taking place on the reduced coefficient matrix mapping local bases to a global one. We demonstrate the performance of this approach first on the simple example of a shear-frame structure, and then on the more complex 3D numerical case study of an earthquake-excited wind turbine tower. Parametric dependence pertains to structural properties, as well as to the temporal and spectral characteristics of the applied excitation. The developed parametric Reduced Order Model (pROM) can be exploited for a number of tasks, including monitoring and diagnostics, control of vibrating structures, and residual life estimation of critical components.
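
    A minimal numpy sketch of the local-basis idea follows: POD bases are extracted from snapshot matrices at two training parameter values, each local basis is expressed in a global basis through a reduced coefficient matrix, and those coefficient matrices are interpolated for a new parameter value. This is only a toy linear-interpolation illustration on random data; the paper's pROM uses a more careful manifold-interpolation scheme for nonlinear structural dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in snapshot matrices for two training parameter values (columns are
# snapshots of the dynamic response); real data would come from simulations.
n_dof, n_snap, r = 50, 20, 4
S0 = rng.standard_normal((n_dof, n_snap))          # snapshots at mu = 0.0
S1 = rng.standard_normal((n_dof, n_snap))          # snapshots at mu = 1.0

def pod_basis(S, r):
    """Rank-r POD basis: leading left singular vectors of the snapshot matrix."""
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :r]

V0, V1 = pod_basis(S0, r), pod_basis(S1, r)        # local bases
Vg = pod_basis(np.hstack([S0, S1]), r)             # global basis

# Reduced coefficient matrices expressing each local basis in the global one.
P0 = Vg.T @ V0
P1 = Vg.T @ V1

# Interpolate the coefficient matrices (here: plain linear interpolation)
# for a new parameter value, then map back to a full-order basis.
mu = 0.3
P_mu = (1 - mu) * P0 + mu * P1
V_mu, _ = np.linalg.qr(Vg @ P_mu)                  # re-orthonormalize

print(V_mu.shape)                                  # (50, 4)
print(np.allclose(V_mu.T @ V_mu, np.eye(r)))       # True: orthonormal columns
```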