71 research outputs found

    Local Belief Dynamics in Network Knowledge Bases

    People are becoming increasingly connected to each other as social networks continue to grow in both number and variety, and this is true for autonomous software agents as well. Taken as a collection, such social platforms can be seen as one complex network with many different types of relations, different degrees of strength for each relation, and a wide range of information at each node. In this context, social media posts made by users are reflections of the contents of their own individual (or local) knowledge bases; modeling how knowledge flows over the network, or how this can possibly occur, is therefore of great interest from a knowledge representation and reasoning perspective. In this article, we provide a formal introduction to the network knowledge base model, and then focus on the problem of how a single agent's knowledge base changes when exposed to a stream of news items coming from other members of the network. We do so by taking the classical belief revision approach: first proposing desirable properties for how such a local operation should be carried out (theoretical characterization), arriving at three different families of local operators; then exploring concrete algorithms (algorithmic characterization) for two of the families; and finally proving properties about the relationship between the two characterizations (representation theorem). One of the most important differences between our approach and the classical models of belief revision is that in our case the input is more complex, carrying additional information about each incoming piece of information.
    Authors: Fabio Rafael Gallo, Gerardo Simari, Maria Vanina Martinez, Natalia Vanesa Abad Santos, Marcelo Alejandro Falappa (CONICET; Universidad Nacional del Sur; Universidad de Buenos Aires; Argentina)
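    The idea of locally revising an agent's beliefs with weighted incoming items can be illustrated with a toy sketch. This is not the paper's actual family of operators: the `NewsItem` fields and the simple strength-based priority policy below are assumptions made purely for illustration.

```python
# Toy "local" revision for an agent in a network knowledge base.
# Beliefs are literals with strengths; an incoming item displaces a
# contradicting belief only when it is at least as strong.
from dataclasses import dataclass

@dataclass(frozen=True)
class NewsItem:
    literal: str       # e.g. "p" or "-p" (negation as a leading minus)
    source: str        # which neighbour in the network sent it
    strength: float    # trust in the source, in [0, 1]

def negate(lit: str) -> str:
    return lit[1:] if lit.startswith("-") else "-" + lit

def revise(base: dict, item: NewsItem) -> dict:
    """Revise a belief base {literal: strength} with an incoming news item."""
    new = dict(base)
    contrary = negate(item.literal)
    if contrary in new and new[contrary] > item.strength:
        return new                      # keep the stronger old belief
    new.pop(contrary, None)             # drop the weaker contradiction
    new[item.literal] = max(item.strength, new.get(item.literal, 0.0))
    return new

base = {"p": 0.4, "q": 0.9}
base = revise(base, NewsItem("-p", source="alice", strength=0.7))
# "p" (0.4) is displaced by the stronger "-p" (0.7); "q" is untouched
```

    The point of the sketch is that, unlike classical belief revision, the input is not a bare formula: it carries network metadata (source, strength) that the operator consults.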

    Multi-Winner Voting with Approval Preferences

    Approval-based committee (ABC) rules are voting rules that output a fixed-size subset of candidates, a so-called committee. ABC rules select committees based on dichotomous preferences, i.e., a voter either approves or disapproves a candidate. This simple type of preference makes ABC rules widely suitable for practical use. In this book, we summarize the current understanding of ABC rules from the viewpoint of computational social choice. The main focus is on axiomatic analysis, algorithmic results, and relevant applications. Comment: This is a draft of the upcoming book "Multi-Winner Voting with Approval Preferences".

    On the Static Analysis of SPARQL Queries with Modal Logic

    Static analysis is a core task in query optimization and knowledge base verification. We study static analysis techniques for SPARQL, the standard language for querying Semantic Web data. Specifically, we investigate the query containment problem and query-update independence analysis, developing techniques based on reductions to the validity problem in logic. We address SPARQL query containment in the presence of the OPTIONAL operator, one of the most complicated constructors in SPARQL and the one that makes the language more expressive than classical query languages such as SQL. We focus on the class of well-designed SPARQL queries, proposed in the literature as a fragment of the language with good properties regarding query evaluation. To date, query containment has been tested using several techniques: graph homomorphisms, canonical databases, automata-theoretic methods, and reduction to the validity problem of a logic. We take the last route: since SPARQL is interpreted over graphs, we encode it in a graph logic, namely the modal logic K interpreted over labelled transition systems, and show that this logic is powerful enough to handle query containment for the well-designed fragment. We show how to translate RDF graphs into transition systems and SPARQL queries into K-formulae; query containment in SPARQL can therefore be reduced to unsatisfiability in K. This approach extends to several fragments of SPARQL, even in the presence of schemas, an extensibility the other methods do not guarantee, and it opens the way to implementations based on satisfiability solvers for K. We also present a benchmark of containment tests for SPARQL queries with OPTIONAL, and report experiments comparing state-of-the-art containment solvers. Finally, we give a preliminary overview of the query-update independence problem. A query is independent of an update when the execution of the update does not affect the result of the query. Determining independence is especially useful in the context of huge RDF repositories, where it avoids expensive yet useless re-evaluation of queries. While this problem has been studied intensively for fragments of relational calculus, no such work exists for the standard query language of the Semantic Web. We propose a definition of independence in the SPARQL context and establish first static-analysis results for certain situations of inclusion between a query and an update.
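    For intuition, the classical homomorphism technique mentioned among the containment-testing approaches can be sketched for the OPTIONAL-free, Boolean conjunctive fragment: freeze the variables of Q1 into constants and try to map Q2's triple patterns onto the frozen patterns. This toy sketch is not the thesis's modal-logic-K encoding.

```python
# Containment of Boolean conjunctive triple-pattern queries:
# q1 is contained in q2 iff q2 maps homomorphically into q1's
# "canonical database" (q1 with variables frozen into constants).
def is_var(t):
    return t.startswith("?")

def freeze(patterns):
    # Turn each variable ?x into a fresh constant "_x"
    return [tuple(("_" + t[1:]) if is_var(t) else t for t in p) for p in patterns]

def contained_in(q1, q2):
    """True iff every graph matching q1 also matches q2 (Boolean conjunctive case)."""
    frozen = freeze(q1)
    def extend(mapping, pat, fact):
        m = dict(mapping)
        for t, c in zip(pat, fact):
            if is_var(t):
                if m.setdefault(t, c) != c:
                    return None         # variable already bound differently
            elif t != c:
                return None             # constant mismatch
        return m
    def search(pats, mapping):
        if not pats:
            return True
        for fact in frozen:
            m = extend(mapping, pats[0], fact)
            if m is not None and search(pats[1:], m):
                return True
        return False
    return search(q2, {})

q1 = [("?x", "knows", "?y"), ("?y", "knows", "alice")]
q2 = [("?a", "knows", "?b")]
# q1 is more restrictive, so every graph matching q1 matches q2,
# but not the other way around
```

    The appeal of the logic-based reduction over this direct test is precisely the extensibility noted in the abstract: OPTIONAL and schema constraints do not fit the plain homomorphism picture.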

    Temporospatial Context-Aware Vehicular Crash Risk Prediction

    With the demand for more vehicles increasing, road safety is becoming a growing concern. Traffic collisions take many lives and cost billions of dollars in losses. This explains the growing interest of governments, academic institutions, and companies in road safety. The vastness and availability of road accident data have provided new opportunities for gaining a better understanding of accident risk factors and for developing more effective accident prediction and prevention regimes. Much of the empirical research on road safety and accident analysis relies on statistical models that capture only limited aspects of crashes. Data mining, on the other hand, has recently gained interest as a reliable approach for investigating road-accident data and providing predictive insights. While some risk factors contribute more frequently to the occurrence of a road accident, the importance of driver behavior, temporospatial factors, and real-time traffic dynamics has been underestimated. This study proposes a framework for predicting crash risk based on historical accident data. The proposed framework incorporates machine learning and data analytics techniques, including clustering, association rule mining, information fusion, and Bayesian networks, to identify driving patterns and other risk factors associated with potential vehicle crashes. Swarm-intelligence-based association rule mining is employed to uncover the underlying relationships and dependencies in collision databases. Data segmentation methods are employed to eliminate the effect of dependent variables. Extracted rules can be used along with real-time mobility data to predict crashes and their severity in real time. The National Collision Database of Canada (NCDB) is used in this research to generate association rules with crash-risk-oriented consequents, and to compare the performance of the swarm-intelligence-based approach with that of other association rule miners.
    Many industry datasets, including road-accident datasets, are deficient in descriptive factors, which is a significant barrier to uncovering meaningful risk factor relationships. To resolve this issue, this study proposes a knowledge base approximation framework that enhances crash risk analysis by integrating pieces of evidence discovered from disparate datasets capturing different aspects of mobility. Dempster-Shafer theory is utilized as the key element of this knowledge base approximation; the method can integrate association rules with acceptable accuracy under certain circumstances that are discussed in this thesis. The proposed framework is tested on the lymphography dataset and the road-accident database of Great Britain. The derived insights are then used as the basis for constructing a Bayesian network that can estimate crash likelihood and risk levels so as to warn drivers and prevent accidents in real time. This Bayesian network approach offers a way to implement a naturalistic driving analysis process for predicting traffic collision risk based on the findings from the data-driven model. A traffic incident detection and localization method is also proposed as a component of the risk analysis model; detecting and localizing traffic incidents enables timely response to accidents and facilitates effective and efficient traffic flow management. The results obtained from the experimental work conducted on this component indicate that our Dempster-Shafer data-fusion-based incident detection method can overcome the challenges arising from erroneous and noisy sensor readings.
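    Dempster's rule of combination, the core of the Dempster-Shafer evidence fusion used above, can be sketched as follows; the frame, the two "sensors", and the mass values are invented for illustration.

```python
# Dempster's rule: combine two mass functions (keyed by frozenset focal
# elements), redistributing the mass lost to conflicting evidence.
def combine(m1, m2):
    joint, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                joint[inter] = joint.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb     # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in joint.items()}

# Two sensors giving evidence about whether an incident occurred:
A = frozenset({"incident"})
B = frozenset({"no_incident"})
theta = A | B                   # the whole frame (ignorance)
m_sensor1 = {A: 0.6, theta: 0.4}
m_sensor2 = {A: 0.7, B: 0.1, theta: 0.2}
fused = combine(m_sensor1, m_sensor2)
# agreement on "incident" strengthens its mass well above either source's
```

    Normalizing by 1 - conflict is exactly the step that becomes problematic for highly conflicting sources, which is why such fusion is reliable only "under certain circumstances", as the abstract notes.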

    Description Logic for Scene Understanding at the Example of Urban Road Intersections

    Understanding a natural scene on the basis of external sensors is a task yet to be solved by computer algorithms. The present thesis investigates the suitability of a particular family of explicit, formal representation and reasoning formalisms for this task, subsumed under the term Description Logic.

    Knowledge Modeling and Exploitation for Collaborative Expertise Processes

    Expertise processes are used today in many domains, and particularly in industry, to assess situations, understand problems, or anticipate risks. Deployed upstream of complex, ill-defined problems, they support the understanding of those problems and thus facilitate decision making. These processes have become so widespread that they are the subject of a standard (NF X 50-110) and of a recommendation guide published in 2011 (FDX 50-046). They rest mainly on the formulation of hypotheses, held with some degree of doubt, by one or more experts. These hypotheses are then progressively validated or invalidated during the successive phases of the process against the available knowledge. The certainty attached to each hypothesis therefore evolves across those phases, and the valid hypotheses ultimately yield a degree of certainty about the understanding of the problem. Although this approach to studying problems has been standardized, it lacks automatic or semi-automatic tools to assist domain experts during the exploratory phases. Moreover, this largely manual approach lacks appropriate mechanisms for managing the knowledge produced so that it is understandable by humans and manipulable by machines. Before proposing solutions to these limitations of current expertise processes, a review of fundamental and applied work in logic, in knowledge representation for expertise and experience, and in collaborative intelligence was carried out to identify the technological building blocks of the proposed solutions.
    An analysis of the NF X 50-110 standard was conducted to understand the characteristics of expertise processes and how they can be formally represented and used as lessons learned. A study of past expert reports on aircraft accidents examined how they can be represented in a machine-readable, general, and extensible format, independent of the domain and shareable between systems. This thesis makes the following contributions to the expertise process: a knowledge formalization and a methodology for collaborative, hypothesis-driven problem solving, illustrated by a case study drawn from a manufacturing problem in which a product was rejected by customers, together with inference mechanisms compatible with the proposed formal representation; collaborative non-monotonic reasoning based on answer set programming and on uncertainty theory using belief functions; and an ontology-based semantic representation of expert reports. These contributions enable a formal, systematic, and human-centred execution of expertise processes; they support appropriate treatment with respect to essential properties such as traceability, transparency, non-monotonic reasoning, and uncertainty, taking into account human doubt and the limited knowledge of experts; and they provide a human- and machine-readable semantic representation of completed expertise studies.

    Multi-Winner Voting with Approval Preferences

    From fundamental concepts and results to recent advances in computational social choice, this open access book provides a thorough and in-depth look at multi-winner voting based on approval preferences. The main focus is on axiomatic analysis, algorithmic results, and several applications that are relevant in artificial intelligence, computer science, and elections of any kind. What is the best way to select a set of candidates for a shortlist, for an executive committee, or for product recommendations? Multi-winner voting is the process of selecting a fixed-size set of candidates based on the preferences expressed by the voters. A wide variety of decision processes, in settings ranging from politics (parliamentary elections) to the design of modern computer applications (collaborative filtering, dynamic Q&A platforms, diversity in search results, etc.), share the problem of identifying a representative subset of alternatives; the study of multi-winner voting provides the principled analysis of this task. Approval-based committee voting rules (in short: ABC rules) are multi-winner voting rules particularly suitable for practical use. Their usability is founded on the straightforward form in which the voters can express preferences: voters simply have to differentiate between approved and disapproved candidates. Proposals for ABC rules are numerous, some dating back to the late 19th century while others have been introduced only very recently. This book explains and discusses these rules, highlighting their individual strengths and weaknesses. With the help of this book, the reader will be able to choose a suitable ABC voting rule in a principled fashion, participate in, and stay up to date with, the ongoing research on this topic.
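    As a concrete illustration of an ABC rule, the sketch below brute-forces Proportional Approval Voting (PAV), one well-known member of this family: a committee scores, for each voter, 1 + 1/2 + ... + 1/k where k is the number of committee members that voter approves. The ballots are invented, and the exhaustive search is only viable for tiny instances (computing PAV winners is NP-hard in general).

```python
# Brute-force Proportional Approval Voting on a toy election.
from itertools import combinations

def pav_score(committee, ballots):
    total = 0.0
    for approved in ballots:
        k = len(approved & set(committee))       # approved committee members
        total += sum(1.0 / i for i in range(1, k + 1))  # harmonic weights
    return total

def pav_winner(candidates, ballots, size):
    # Try every size-k committee and keep the best-scoring one
    return max(combinations(sorted(candidates), size),
               key=lambda c: pav_score(c, ballots))

ballots = [{"a", "b"}, {"a", "b"}, {"a", "b"}, {"c"}]
winner = pav_winner({"a", "b", "c"}, ballots, size=2)
# {"a","b"} scores 3 * (1 + 1/2) = 4.5, beating {"a","c"} and {"b","c"} at 4.0
```

    The diminishing harmonic weights are what give PAV its proportionality: a voter's second approved winner counts less than the first, so large voter blocs cannot monopolize the committee.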

    DAG-Based Attack and Defense Modeling: Don't Miss the Forest for the Attack Trees

    This paper presents the current state of the art on attack and defense modeling approaches that are based on directed acyclic graphs (DAGs). DAGs allow for a hierarchical decomposition of complex scenarios into simple, easily understandable and quantifiable actions. Methods based on threat trees and Bayesian networks are two well-known approaches to security modeling. However, there exist more than 30 DAG-based methodologies, each having different features and goals. The objective of this survey is to present a complete overview of graphical attack and defense modeling techniques based on DAGs. This consists of summarizing the existing methodologies, comparing their features, and proposing a taxonomy of the described formalisms. This article also supports the selection of an adequate modeling technique depending on user requirements.
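    The simplest formalism in this family, the attack tree, can be illustrated with a toy bottom-up evaluation: OR gates take the cheapest child and AND gates sum their children, yielding the attacker's minimal cost. The gates, node names, and costs below are invented for the example.

```python
# Bottom-up quantitative evaluation of a toy attack tree.
# Nodes: ("leaf", name, cost) | ("and"/"or", name, children)
def min_cost(node):
    """Minimal attacker cost: OR picks the cheapest branch,
    AND requires every child, so costs add up."""
    kind = node[0]
    if kind == "leaf":
        return node[2]
    children = [min_cost(c) for c in node[2]]
    return min(children) if kind == "or" else sum(children)

steal_data = (
    "or", "steal customer data", [
        ("and", "network attack", [
            ("leaf", "phish credentials", 40),
            ("leaf", "exfiltrate database", 60),
        ]),
        ("leaf", "bribe insider", 500),
    ])
# cheapest attack: phishing + exfiltration = 100, not bribery (500)
```

    The surveyed formalisms generalize exactly this fold: other attributes (probability, required skill, detectability) and richer gates replace the min/sum pair, and defenses add counter-nodes.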

    Efficient paraconsistent reasoning with rules and ontologies for the semantic web

    Ontologies formalized by means of Description Logics (DLs) and rules in the form of Logic Programs (LPs) are two prominent formalisms in the field of Knowledge Representation and Reasoning. While DLs adhere to the Open World Assumption and are suited for taxonomic reasoning, LPs implement reasoning under the Closed World Assumption, so that default knowledge can be expressed. However, for many applications it is useful to have a means that allows reasoning over an open domain and expressing rules with exceptions at the same time. Hybrid MKNF knowledge bases make such a means available by formalizing DLs and LPs in a common logic, the Logic of Minimal Knowledge and Negation as Failure (MKNF). Since rules and ontologies are used in open environments such as the Semantic Web, inconsistencies cannot always be avoided. This poses a problem due to the Principle of Explosion, which holds in classical logics. Paraconsistent logics offer a solution to this issue by assigning meaningful models even to contradictory sets of formulas. Consequently, paraconsistent semantics for DLs and LPs have been investigated intensively. Our goal is to apply the paraconsistent approach to the combination of DLs and LPs in hybrid MKNF knowledge bases. In this thesis, a new six-valued semantics for hybrid MKNF knowledge bases is introduced, extending the three-valued approach by Knorr et al., which is based on the well-founded semantics for logic programs. Additionally, a procedural way of computing paraconsistent well-founded models for hybrid MKNF knowledge bases by means of an alternating fixpoint construction is presented, and the algorithm is proven sound and complete w.r.t. the model-theoretic characterization of the semantics. Moreover, it is shown that the new semantics is faithful w.r.t. well-studied paraconsistent semantics for DLs and LPs, respectively, and maintains the efficiency of the approach it extends.
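    The alternating fixpoint construction the abstract refers to can be illustrated in its classical two-valued form for plain normal logic programs; the thesis's six-valued paraconsistent version for MKNF is considerably more involved. Rules below are (head, positive_body, negative_body) triples over propositional atoms.

```python
# Well-founded semantics of a normal logic program via the classical
# alternating fixpoint: alternate an underestimate of the true atoms
# with an overestimate, until both stabilize.
def least_model(rules, assumed_false):
    """Least model of the reduct in which 'not a' succeeds iff a is assumed false."""
    true, changed = set(), True
    while changed:
        changed = False
        for head, pos, neg in rules:
            if head not in true \
               and all(a in true for a in pos) \
               and all(a in assumed_false for a in neg):
                true.add(head)
                changed = True
    return true

def well_founded(rules, atoms):
    true = set()
    while True:
        over = least_model(rules, atoms - true)      # possibly true
        new_true = least_model(rules, atoms - over)  # surely true
        if new_true == true:
            return true, atoms - over                # (true atoms, false atoms)
        true = new_true

rules = [("p", [], ["q"]),   # p :- not q
         ("q", [], ["p"]),   # q :- not p
         ("r", [], []),      # r.
         ("s", [], ["r"])]   # s :- not r
wf_true, wf_false = well_founded(rules, {"p", "q", "r", "s"})
# r is well-founded true, s well-founded false; p and q stay undefined
```

    Atoms in neither returned set (here p and q, which block each other) are left undefined, which is the third truth value that the thesis's paraconsistent semantics further refines to six.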