158 research outputs found

    Reasoning about complex agent knowledge - Ontologies, Uncertainty, rules and beyond

    Ph.D. (Doctor of Philosophy)

    Toward knowledge-based automatic 3D spatial topological modeling from LiDAR point clouds for urban areas

    The processing of very large sets of LiDAR data is very costly and necessitates automatic 3D modeling approaches. In addition, incomplete point clouds caused by occlusion and uneven density, together with the uncertainties involved in processing LiDAR data, make the automatic creation of semantically enriched 3D models difficult. This research work aims at developing new solutions for the automatic creation of complete 3D geometric models with semantic labels from incomplete point clouds. A framework integrating knowledge about objects in urban scenes into 3D modeling is proposed to improve the completeness of 3D geometric models, using qualitative reasoning based on semantic information about objects and their components and on their geometric and spatial relations. Moreover, we aim at taking advantage of qualitative knowledge about objects in automatic feature recognition and, further, in the creation of complete 3D geometric models from incomplete point clouds. To achieve this goal, several algorithms are proposed for automatic segmentation, identification of the topological relations between object components, feature recognition, and the creation of complete 3D geometric models. (1) Machine learning solutions have been proposed for automatic semantic segmentation and CAD-like segmentation, in order to segment objects with complex structures. (2) We proposed an algorithm to efficiently identify topological relations between object components extracted from point clouds so as to assemble a Boundary Representation model. (3) The integration of object knowledge and feature recognition was developed to automatically obtain semantic labels for objects and their components. To deal with uncertain information, a rule-based automatic uncertain reasoning solution was developed to recognize building components from uncertain information extracted from point clouds. (4) A heuristic method for creating complete 3D geometric models was designed using building knowledge, the geometric and topological relations of building components, and the semantic information obtained from feature recognition. Finally, the proposed framework for improving automatic 3D modeling from point clouds of urban areas was validated by a case study aimed at creating a complete 3D building model. The experiments demonstrate that integrating knowledge into the steps of 3D modeling is effective for creating a complete building model from incomplete point clouds.
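The rule-based recognition described in step (3) can be pictured with a short sketch. The segment structure, rules, and thresholds below are illustrative assumptions rather than the method developed in this work; they only show how qualitative rules over geometry extracted from a point cloud (plane orientation, height, fit quality) might assign a semantic label together with a confidence value reflecting uncertainty.

```python
# Hypothetical sketch: rule-based labeling of planar segments extracted from a
# point cloud, with a confidence score standing in for the uncertainty handling
# described in the abstract. All thresholds and field names are invented.
from dataclasses import dataclass

@dataclass
class PlanarSegment:
    normal: tuple        # unit normal of the fitted plane (nx, ny, nz)
    centroid_z: float    # mean height of the segment's points, in metres
    fit_rmse: float      # RMSE of the plane fit, used as an uncertainty proxy

def classify_segment(seg: PlanarSegment) -> tuple:
    """Assign a semantic label and a confidence in [0, 1] using simple rules."""
    verticality = abs(seg.normal[2])            # 0 = vertical plane, 1 = horizontal
    confidence = max(0.0, 1.0 - seg.fit_rmse)   # poorer fits get lower confidence
    if verticality < 0.2:
        return "wall", confidence
    if verticality > 0.8 and seg.centroid_z > 2.5:
        return "roof", confidence
    if verticality > 0.8:
        return "floor", confidence
    return "unknown", 0.5 * confidence

# A nearly vertical, well-fitted plane is labelled as a wall with high confidence.
print(classify_segment(PlanarSegment(normal=(0.99, 0.05, 0.10), centroid_z=1.5, fit_rmse=0.03)))
```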


    Pseudo-contractions as Gentle Repairs

    Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed over the past decades, with numerous ideas in common but little interaction between their communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
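As a deliberately naive point of contrast with the gentle repairs discussed above, the sketch below removes whole axioms from a toy Horn knowledge base until the unwanted consequence is no longer entailed; a pseudo-contraction or gentle repair would instead replace one of those axioms with a weaker version. The knowledge base encoding and helper names are assumptions made only for this example.

```python
# Naive "repair by removal" over a tiny Horn knowledge base (illustrative only).
from itertools import combinations

# Facts are strings; rules are (frozenset(body), head).
KB = [
    ("fact", "penguin(tweety)"),
    ("rule", (frozenset({"penguin(tweety)"}), "bird(tweety)")),
    ("rule", (frozenset({"bird(tweety)"}), "flies(tweety)")),
]

def entails(kb, goal):
    """Forward chaining over Horn rules."""
    facts = {item for kind, item in kb if kind == "fact"}
    changed = True
    while changed:
        changed = False
        for kind, item in kb:
            if kind == "rule":
                body, head = item
                if body <= facts and head not in facts:
                    facts.add(head)
                    changed = True
    return goal in facts

def smallest_repairs(kb, unwanted):
    """All smallest sets of axioms whose removal blocks the unwanted consequence."""
    for size in range(len(kb) + 1):
        hits = [set(idx) for idx in combinations(range(len(kb)), size)
                if not entails([ax for i, ax in enumerate(kb) if i not in idx], unwanted)]
        if hits:
            return hits
    return []

# Removing any single axiom already blocks the entailment of flies(tweety);
# a gentle repair would keep the axiom but weaken it instead.
print(smallest_repairs(KB, "flies(tweety)"))
```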

    Integration of Logic and Probability in Terminological and Inductive Reasoning

    This thesis deals with Statistical Relational Learning (SRL), a research area combining principles and ideas from three important subfields of Artificial Intelligence: machine learning, knowledge representation and reasoning under uncertainty. Machine learning is the study of systems that improve their behavior over time with experience; the learning process typically involves a search through various generalizations of the examples, in order to discover regularities or classification rules. A wide variety of machine learning techniques have been developed in the past fifty years, most of which used propositional logic as a (limited) representation language. Recently, more expressive knowledge representations have been considered, to cope with a variable number of entities as well as the relationships that hold amongst them. These representations are mostly based on logic, which, however, has limitations when reasoning over uncertain domains. These limitations have been lifted, giving rise to a multitude of formalisms that combine probabilistic reasoning with logics, databases or logic programming, where probability theory provides a formal basis for reasoning under uncertainty. In this thesis we consider in particular the proposals for integrating probability into Logic Programming, since the resulting probabilistic logic programming languages have very interesting computational properties. In Probabilistic Logic Programming, the so-called "distribution semantics" has gained wide popularity. This semantics was introduced for the PRISM language (1995) but is shared by many other languages: Independent Choice Logic, Stochastic Logic Programs, CP-logic, ProbLog and Logic Programs with Annotated Disjunctions (LPADs). A program in one of these languages defines a probability distribution over normal logic programs, called worlds. This distribution is then extended to queries, and the probability of a query is obtained by marginalizing the joint distribution of the query and the programs. The languages following the distribution semantics differ in the way they define the distribution over logic programs. The first part of this dissertation presents techniques for learning probabilistic logic programs under the distribution semantics. Two problems are considered: parameter learning and structure learning, that is, inferring from data either the values of the parameters or both the structure and the parameters of the program. This work contributes an algorithm for parameter learning, EMBLEM, and two algorithms for structure learning (SLIPCASE and SLIPCOVER) of probabilistic logic programs, in particular LPADs. EMBLEM is based on the Expectation Maximization approach and computes the expectations directly on the Binary Decision Diagrams that are built for inference. SLIPCASE performs a beam search in the space of LPADs, while SLIPCOVER performs a beam search in the space of probabilistic clauses and a greedy search in the space of LPADs, improving on SLIPCASE's performance. All learning approaches have been evaluated in several real-world relational domains.
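The distribution semantics can be made concrete with a small, self-contained example. The encoding below is a hand-rolled sketch, not the syntax of PRISM, ProbLog, or LPADs: two independent probabilistic facts induce four worlds, and the probability of a query is the sum of the probabilities of the worlds in which it succeeds.

```python
# Minimal illustration of the distribution semantics by explicit enumeration of
# worlds; real systems avoid this exponential enumeration (e.g. via BDDs).
from itertools import product

# Probabilistic facts, e.g. 0.6::heads(c1).  0.5::heads(c2).
prob_facts = {"heads(c1)": 0.6, "heads(c2)": 0.5}

def query_holds(world):
    # Deterministic rule: win :- heads(c1), heads(c2).
    return world["heads(c1)"] and world["heads(c2)"]

def query_probability():
    names = list(prob_facts)
    total = 0.0
    for choices in product([True, False], repeat=len(names)):
        world = dict(zip(names, choices))
        weight = 1.0
        for name, chosen in world.items():
            weight *= prob_facts[name] if chosen else 1.0 - prob_facts[name]
        if query_holds(world):
            total += weight
    return total

print(query_probability())   # 0.6 * 0.5 = 0.3
```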
The second part of the thesis concerns the field of Probabilistic Description Logics, where we consider a logical framework suitable for the Semantic Web. Description Logics (DL) are a family of formalisms for representing knowledge. Research in the field of knowledge representation and reasoning is usually focused on methods for providing high-level descriptions of the world that can be effectively used to build intelligent applications. Description Logics have been especially effective as the representation language for formal ontologies. Ontologies model a domain through the definition of concepts and their properties and relations. Ontologies are the structural frameworks for organizing information and are used in artificial intelligence, the Semantic Web, systems engineering, software engineering, biomedical informatics, etc. They should also allow asking questions about the concepts and instances described, through inference procedures. Recently, the need to represent uncertain information in these domains has led to probabilistic extensions of DLs. The contribution of this dissertation is twofold: (1) a new semantics for the Description Logic SHOIN(D), based on the distribution semantics for probabilistic logic programs, which embeds probability; (2) a probabilistic reasoner for computing the probability of queries over uncertain knowledge bases following this semantics. The explanations of queries are encoded in Binary Decision Diagrams, with the same technique employed in the learning systems developed for LPADs. This approach has been evaluated on a real-world probabilistic ontology.
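The last step mentioned above, turning the explanations of a query into a probability, can be sketched as a Shannon expansion over the independent probabilistic axioms; a Binary Decision Diagram performs the same computation while sharing subproblems. The axiom names, probabilities, and explanations below are invented for illustration and are not taken from the evaluated ontology.

```python
# Probability that at least one explanation (a set of independent probabilistic
# axioms) holds, computed by recursive Shannon expansion (illustrative sketch).
def prob_of_explanations(explanations, axiom_probs):
    if any(len(e) == 0 for e in explanations):
        return 1.0                      # an empty explanation is already satisfied
    if not explanations:
        return 0.0                      # no explanation can be satisfied any more
    axiom = next(iter(explanations[0]))
    p = axiom_probs[axiom]
    # Branch where the axiom is included: it is satisfied in every explanation.
    with_axiom = [e - {axiom} for e in explanations]
    # Branch where it is excluded: explanations requiring it are dead.
    without_axiom = [e for e in explanations if axiom not in e]
    return (p * prob_of_explanations(with_axiom, axiom_probs)
            + (1 - p) * prob_of_explanations(without_axiom, axiom_probs))

axiom_probs = {"ax1": 0.4, "ax2": 0.3, "ax3": 0.7}
explanations = [{"ax1", "ax3"}, {"ax2"}]   # two explanations of the same query
print(prob_of_explanations(explanations, axiom_probs))   # 0.496
```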

    Methods for computing inconsistency measures of OWL ontologies

    Relevance of the topic. The development of information and telecommunication technologies increases the amount of information needed by corporate systems, so efficient data processing has become a pressing problem. One solution is to process data in systems that use ontologies. An ontology is a formalized representation of knowledge about a particular subject area, suitable for automated processing: the data occupy less memory, yet more information can be derived from them. Since the size of ontologies is constantly growing, inconsistency, that is, internal contradiction within an ontology, is a common occurrence. Processing and analysing such ontologies requires methods for computing the degree of inconsistency, and these methods are the subject of this thesis. The object of the study is ontological systems and the inconsistency that arises when building ontologies. The subject of the study is methods for computing the inconsistency measure of OWL ontologies. Research methods: methods of mathematical statistics for analysing the computation of the inconsistency measure of OWL ontologies. The purpose of the work: to increase the efficiency of processing inconsistent ontologies by computing their degree of inconsistency; to adapt description logic approaches to inconsistency measurement to OWL ontologies; and to optimize the methods for computing the inconsistency measure in order to reduce their execution time.
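To make the notion of an inconsistency measure concrete, the sketch below computes one classical measure, the number of minimal inconsistent subsets, over a tiny propositional knowledge base. The formulas, the brute-force satisfiability check, and the choice of measure are assumptions for illustration; for OWL ontologies a description logic reasoner would take the place of the SAT check.

```python
# Illustrative inconsistency measure: count the minimal inconsistent subsets of
# a small propositional knowledge base (brute force, two atoms only).
from itertools import combinations, product

# Each formula maps an interpretation {atom: bool} to a truth value.
KB = {
    "a":      lambda i: i["a"],
    "not_a":  lambda i: not i["a"],
    "a_or_b": lambda i: i["a"] or i["b"],
    "not_b":  lambda i: not i["b"],
}
ATOMS = ["a", "b"]

def satisfiable(formulas):
    """Brute-force SAT over the atoms used in this example."""
    for values in product([True, False], repeat=len(ATOMS)):
        interp = dict(zip(ATOMS, values))
        if all(f(interp) for f in formulas):
            return True
    return False

def minimal_inconsistent_subsets(kb):
    mis = []
    names = list(kb)
    for size in range(1, len(names) + 1):
        for subset in combinations(names, size):
            if any(set(m) <= set(subset) for m in mis):
                continue                     # a smaller inconsistent core is inside
            if not satisfiable([kb[n] for n in subset]):
                mis.append(subset)
    return mis

mis = minimal_inconsistent_subsets(KB)
print(len(mis), mis)   # the MI measure of this knowledge base, and the subsets
```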

    Automated Reasoning

    This volume, LNAI 13385, constitutes the refereed proceedings of the 11th International Joint Conference on Automated Reasoning, IJCAR 2022, held in Haifa, Israel, in August 2022. The 32 full research papers and 9 short papers presented together with two invited talks were carefully reviewed and selected from 85 submissions. The papers focus on the following topics: Satisfiability, SMT Solving, Arithmetic; Calculi and Orderings; Knowledge Representation and Justification; Choices, Invariance, Substitutions and Formalization; Modal Logics; Proof Systems and Proof Search; Evolution, Termination and Decision Problems. This is an open access book.