
    The Dawn of Italian Geological Cartography at the Paris Universal Exhibition of 1878

    The Paris Universal Exhibition of 1878 represented a turning point for geological science in Italy. Just a few years earlier, immediately after the unification of the country, the two national technical and scientific institutions for the discipline had been established: the Royal Geological Committee in 1867 and the Royal Geological Survey in 1873. The former was tasked with coordinating the geological mapping of the national territory, which was to be carried out by the geologists and surveying engineers of the latter, at the time placed under the Royal Corps of Mines of the Ministry of Agriculture, Industry and Commerce. At the Exhibition, the Royal Geological Survey presented a series of geological and thematic maps, at both detailed and synthesis scales, together with a number of cartographic works produced independently by various authors. This body of work, which represented the best of Italian geological cartography of the time, was recognized for its high technical and scientific value and received numerous prizes and awards. Beyond the official awards conferred by the Exhibition jury, a less visible but, for the community of geologists, far more important recognition was the assignment to Italy of the organization of the 2nd International Geological Congress, which took place in Bologna in 1881.

    Epistemic Integrity Constraints for Ontology-Based Data Management

    Ontology-based data management (OBDM) is a powerful knowledge-oriented paradigm for managing data spread over multiple heterogeneous sources. In OBDM, the data sources of an information system are handled through the reconciled view provided by an ontology, i.e., a conceptualization of the underlying domain of interest expressed in some formal language. In any information system where the basic knowledge resides in data sources, it is of paramount importance to specify the acceptable states of such information. Usually, this is done via integrity constraints, i.e., requirements that the data must satisfy, formally expressed in some specific language. However, while the semantics of integrity constraints are clear in the context of databases, the presence of inferred information, typical of OBDM systems, considerably complicates the matter. In this paper, we establish a novel framework for integrity constraints in OBDM scenarios, based on the notion of the knowledge state of the information system. For integrity constraints in this framework, we define a language based on epistemic logic, and study the decidability and complexity of both checking satisfaction and performing different forms of static analysis on them.
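
    As an illustration of the kind of requirement such an epistemic language can express (the predicates here are hypothetical, not taken from the paper), a constraint may demand that for every employee the system knows about, it also knows some salary:

        \forall x \, \big( \mathbf{K}\,\mathrm{Employee}(x) \rightarrow \exists y \, \mathbf{K}\,\mathrm{salary}(x, y) \big)

    Read epistemically, this is violated only when the system knows of an employee without knowing any salary for them; it says nothing about whether a salary exists in the modeled world, which is precisely the distinction that inferred information forces one to make.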

    Benchmarking Approximate Consistent Query Answering


    Queries with Arithmetic on Incomplete Databases

    The standard notion of query answering over incomplete databases is that of certain answers, guaranteeing correctness regardless of how incomplete data is interpreted. In the majority of real-life databases, relations have numerical columns and queries use arithmetic and comparisons. Even though the notion of certain answers still applies, we explain that it becomes much more problematic when missing data occurs in numerical columns. We propose a new general framework that allows us to assign a measure of certainty to query answers. We test it in the agnostic scenario where we have no prior information about the values of numerical attributes, similarly to the predominant approach to handling incomplete data, which assumes that each null can be interpreted as an arbitrary value of the domain. The key technical challenge is the lack of a uniform distribution over the entire domain of numerical attributes, such as the real numbers. We overcome this by associating the measure of certainty with the asymptotic behavior of volumes of some subsets of Euclidean space. We show that this measure is well defined, and describe approaches to computing and approximating it. While it can be computationally hard, or result in an irrational number, even for simple constraints, we produce polynomial-time randomized approximation schemes, with multiplicative guarantees for conjunctive queries and with additive guarantees for arbitrary first-order queries. We also describe a set of experimental results that confirm the feasibility of this approach.
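
    A minimal sketch of the underlying idea (an illustration of the volume-based measure described above, not the paper's actual algorithm; the condition and parameters are invented): for an answer whose truth depends on a null numerical value, estimate its certainty as the limiting fraction of the interval [-r, r] on which the condition holds, approximated by sampling at increasing r.

        import random

        def certainty_estimate(condition, radii=(1e2, 1e4, 1e6), samples=100_000):
            """Estimate the asymptotic fraction of [-r, r] on which `condition`
            holds as r grows; the sequence should stabilize for definable sets."""
            fractions = []
            for r in radii:
                hits = sum(condition(random.uniform(-r, r)) for _ in range(samples))
                fractions.append(hits / samples)
            return fractions

        # A null salary compared against a constant: "salary > 50000" holds on
        # a half-line, so the estimates approach 1/2 as the radius grows.
        print(certainty_estimate(lambda v: v > 50000))

    The stabilizing value, 1/2 here, is the measure of certainty assigned to the answer under the agnostic interpretation of the null.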

    Data Accuracy as Knowledge in Ontology Based Data Access (preliminary report)

    In the context of Ontology-Based Data Access (OBDA), consistency of the data ensures that the data sources are coherent with the rules of the domain of interest represented by the ontology. However, even when consistency holds, the data underlying an OBDA system can still be in a state that users perceive as being of poor quality, according to some intuitive requirements. In many of these cases, the mechanism currently used to specify an OBDA system seems to lack the ability to express such requirements. In this work, we argue that those requirements are often not about the world that the ontology represents, but about the knowledge that the system possesses about the world. Thus, with the aim of formalizing data quality specifications in the OBDA context, we propose the use of a language of modal constraints, and show how they can be used in practice to capture cases of poor data quality. For this novel class of assertions, and for OBDA systems where the ontology is expressed in DL-Lite, we present algorithms and complexity results for the problem of checking the accuracy of the knowledge that the system possesses, i.e., whether the system respects the modal constraints in the specification.
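
    A hypothetical example of such a modal constraint (illustrative predicates, not from the paper): requiring that the birth date of every known person is itself known,

        \mathbf{K}\,\mathrm{Person}(x) \rightarrow \exists y \, \mathbf{K}\,\mathrm{birthDate}(x, y)

    differs from the plain ontology axiom \mathrm{Person}(x) \rightarrow \exists y\,\mathrm{birthDate}(x, y), which speaks about the world and is satisfied even when no birth date appears in the sources. The modal version instead flags exactly the records whose dates are missing from the data, which is the kind of quality defect the paper aims to capture.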

    Reasoning about Measures of Unmeasurable Sets

    In a variety of reasoning tasks, one estimates the likelihood of events by means of the volumes of the sets they define. Such sets need to be measurable, which is usually achieved by putting bounds, sometimes ad hoc, on them. We address the question of how unbounded or unmeasurable sets can be measured nonetheless. Intuitively, we want to know how likely a randomly chosen point is to be in a given set, even in the absence of a uniform distribution over the entire space. To address this, we follow a recently proposed approach of taking the intersection of a set with balls of increasing radius, and defining the measure by means of the asymptotic behavior of the proportion of such balls taken up by the set. We show that this approach works for every set definable in first-order logic with the usual arithmetic over the reals (addition, multiplication, exponentiation, etc.), and every uniform measure over the space, of which the usual Lebesgue measure (area, volume, etc.) is an example. In fact, we establish a correspondence between good asymptotic behavior and the finiteness of the VC dimension of definable families of sets. Towards computing the measure thus defined, we show how to avoid the asymptotics and characterize it via a specific subset of the unit sphere. Using the definability of this set, and known techniques for sampling from the unit sphere, we give two algorithms for estimating our measure of unbounded or unmeasurable sets, with deterministic and probabilistic guarantees, the latter being more efficient. Finally, we show that a discrete analog of this measure exists and is similarly well behaved.
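
    A toy illustration of the sphere characterization (my own sketch under a strong simplifying assumption, not one of the paper's two algorithms): for a set whose membership along each ray from the origin eventually stabilizes, such as a half-space, the measure is the fraction of unit directions whose ray ends up inside the set, which can be estimated by sampling directions via normalized Gaussian vectors.

        import math
        import random

        def sphere_fraction(in_set, dim=3, samples=200_000, far=1e6):
            """Sample directions uniformly on the unit sphere (normalized
            Gaussian vectors) and test membership far along each ray."""
            hits = 0
            for _ in range(samples):
                g = [random.gauss(0.0, 1.0) for _ in range(dim)]
                norm = math.sqrt(sum(x * x for x in g))
                hits += in_set([far * x / norm for x in g])
            return hits / samples

        # The unbounded half-space {x : x_0 > 0} takes up half of every ball,
        # and half of all directions, so the estimate should be close to 0.5.
        print(sphere_fraction(lambda p: p[0] > 0))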

    Propositional and predicate logics of incomplete information

    One of the most common scenarios of handling incomplete information occurs in relational databases. They describe incomplete knowledge with three truth values, using Kleene’s logic for propositional formulae and a rather peculiar extension to predicate calculus. This design by a committee from several decades ago is now part of the standard adopted by vendors of database management systems. But is it really the right way to handle incompleteness in propositional and predicate logics? Our goal is to answer this question. Using an epistemic approach, we first characterize possible levels of partial knowledge about propositions, which leads to six truth values. We impose rationality conditions on the semantics of the connectives of the propositional logic, and prove that Kleene’s logic is the maximal sublogic to which the standard optimization rules apply, thereby justifying this design choice. For extensions to predicate logic, however, we show that the additional truth values are not necessary: every many-valued extension of first-order logic over databases with incomplete information represented by null values is no more powerful than the usual two-valued logic with the standard Boolean interpretation of the connectives. We use this observation to analyze the logic underlying SQL query evaluation, and conclude that the many-valued extension for handling incompleteness does not add any expressiveness to it.
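
    The three-valued connectives in question are easy to state concretely (a self-contained sketch, with None standing for the unknown truth value that SQL assigns to comparisons involving NULL):

        # Kleene's strong three-valued logic; None means "unknown".
        def and3(a, b):
            if a is False or b is False:
                return False
            if a is True and b is True:
                return True
            return None

        def or3(a, b):
            if a is True or b is True:
                return True
            if a is False and b is False:
                return False
            return None

        def not3(a):
            return None if a is None else not a

        # A WHERE clause keeps a row only if it evaluates to True, so the
        # classical tautology "x = 1 OR NOT (x = 1)" filters the row out
        # when x is NULL: the comparison is unknown, and so is the whole test.
        x_eq_1 = None
        print(or3(x_eq_1, not3(x_eq_1)))  # None, not True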

    Model-theoretic Characterizations of Rule-based Ontologies


    Coping with Incomplete Data: Recent Advances

    Handling incomplete data in a correct manner is a notoriously hard problem in databases. Theoretical approaches rely on the computationally hard notion of certain answers, while practical solutions rely on ad hoc query evaluation techniques based on three-valued logic. Can we find a middle ground, and produce correct answers efficiently? The paper surveys results of the last few years motivated by this question. We reexamine the notion of certainty itself, and show that it is much more varied than previously thought. We identify cases when certain answers can be computed efficiently and, short of that, provide deterministic and probabilistic approximation schemes for them. We look at the role of three-valued logic as used in SQL query evaluation, and discuss the correctness of the choice, as well as the necessity of such a logic for producing query answers.
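
    To make the notion of certain answers concrete (a naive toy illustration, not one of the survey's techniques): with marked nulls, an answer is certain if it holds under every assignment of values to the nulls, so for a tiny instance one can enumerate assignments over a small domain and intersect the answer sets.

        from itertools import product

        # A tiny incomplete relation R with marked nulls n1 and n2.
        NULLS = ["n1", "n2"]
        R = [(1, "n1"), ("n2", 2)]

        def query(rel):
            """Q(x) :- R(x, y): project the first column."""
            return {a for a, _ in rel}

        def certain_answers(rel, domain):
            """Intersect the answers over every valuation of the nulls
            (exponential, hence feasible only for toy instances)."""
            answers = None
            for values in product(domain, repeat=len(NULLS)):
                valuation = dict(zip(NULLS, values))
                world = [tuple(valuation.get(v, v) for v in t) for t in rel]
                answers = query(world) if answers is None else answers & query(world)
            return answers

        # 1 appears in every possible world, while the value of n2 varies,
        # so {1} is the certain answer.
        print(certain_answers(R, domain=[1, 2, 3]))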