
    An approach to description logic with support for propositional attitudes and belief fusion

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-540-89765-1_8 (Revised Selected and Invited Papers of ISWC International Workshops, URSW 2005-2007). In the (Semantic) Web, the existence or producibility of certain, consensually agreed, or authoritative knowledge cannot be assumed, and criteria to judge the trustworthiness and reputation of knowledge sources may not be given. These issues give rise to formalizations of web information which factor in heterogeneous and possibly inconsistent assertions and intentions, and make such heterogeneity explicit and manageable for reasoning mechanisms. Such approaches can provide valuable metaknowledge in contemporary application fields, like open or distributed ontologies, social software, ranking and recommender systems, and domains with a high amount of controversy, such as politics and culture. As an approach to this, we introduce a lean formalism for the Semantic Web which allows for the explicit representation of controversial individual and group opinions and goals by means of so-called social contexts, and optionally for the probabilistic belief merging of uncertain or conflicting statements. In doing so, our approach generalizes concepts such as provenance annotation and voting in the context of ontologies and other kinds of Semantic Web knowledge. This work was partially funded by the German National Research Foundation DFG (Br609/13-1, research project "Open Ontologies and Open Knowledge Bases") and by the Spanish National Plan of R+D, project no. TSI2005-08225-C07-0.
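    For a concrete feel of how a belief-merging step might operate, the following Python sketch averages conflicting provenance-annotated assertions into a single merged belief per statement. The function name, weighting scheme and data are invented for illustration; the paper's social-context formalism is considerably richer.

        # Hypothetical sketch (not the paper's formalism): probabilistic merging of
        # conflicting statements annotated with their source ("social context").
        from collections import defaultdict

        def merge_beliefs(annotated_statements, source_weights=None):
            """annotated_statements: list of (source, statement, belief in [0, 1]).
            Returns a merged belief per statement as a weighted average over sources."""
            totals = defaultdict(float)
            weights = defaultdict(float)
            for source, statement, belief in annotated_statements:
                w = (source_weights or {}).get(source, 1.0)  # e.g. reputation of the source
                totals[statement] += w * belief
                weights[statement] += w
            return {s: totals[s] / weights[s] for s in totals}

        # Two sources disagree on the same assertion; merging yields an intermediate belief.
        merged = merge_beliefs([
            ("alice", "City hasClimate Mild", 0.9),
            ("bob",   "City hasClimate Mild", 0.2),
        ])
        print(merged)  # {'City hasClimate Mild': 0.55...}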

    Introducing fuzzy trust for managing belief conflict over semantic web data

    Interpreting Semantic Web data by different human experts can end up in scenarios where each expert comes up with different and conflicting ideas about what a concept means and how it relates to other concepts. Software agents that operate on the Semantic Web have to deal with similar scenarios, where the interpretations of the Semantic Web data describing heterogeneous sources become contradictory. One such application area of the Semantic Web is ontology mapping, where different similarities have to be combined into a more reliable and coherent view, which can easily become unreliable if the conflicting beliefs in similarities are not managed effectively between the different agents. In this paper we propose a solution for managing this conflict by introducing trust between the mapping agents based on the fuzzy voting model.
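    As a rough illustration of trust-weighted combination of conflicting similarities (not the authors' fuzzy voting model), the sketch below down-weights mapping agents whose similarity scores deviate from the group consensus; all names and values are assumptions.

        # Minimal illustrative sketch: agents report conflicting similarity scores for the
        # same mapping; each agent's weight (a crude stand-in for trust) decays with its
        # distance from the group consensus.
        from statistics import median

        def trusted_similarity(scores):
            """scores: mapping agent -> similarity in [0, 1]. Returns a trust-weighted value."""
            consensus = median(scores.values())
            trust = {a: 1.0 - abs(s - consensus) for a, s in scores.items()}  # fuzzy-style membership
            total = sum(trust.values())
            return sum(trust[a] * scores[a] for a in scores) / total

        print(trusted_similarity({"agent1": 0.8, "agent2": 0.75, "agent3": 0.2}))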

    Quality of Information in Mobile Crowdsensing: Survey and Research Challenges

    Smartphones have become the most pervasive devices in people's lives, and are clearly transforming the way we live and perceive technology. Today's smartphones benefit from almost ubiquitous Internet connectivity and come equipped with a plethora of inexpensive yet powerful embedded sensors, such as the accelerometer, gyroscope, microphone, and camera. This unique combination has enabled revolutionary applications based on the mobile crowdsensing paradigm, such as real-time road traffic monitoring, air and noise pollution monitoring, crime control, and wildlife monitoring, to name a few. Unlike in prior sensing paradigms, humans are now the primary actors of the sensing process, since they are fundamental in retrieving reliable and up-to-date information about the event being monitored. As humans may behave unreliably or maliciously, assessing and guaranteeing Quality of Information (QoI) becomes more important than ever. In this paper, we provide a new framework for defining and enforcing QoI in mobile crowdsensing, and analyze in depth the current state of the art on the topic. We also outline novel research challenges, along with possible directions for future work. Comment: To appear in ACM Transactions on Sensor Networks (TOSN).

    Tracking Uncertainty Propagation from Model to Formalization: Illustration on Trust Assessment

    This paper investigates the use of the URREF ontology to characterize and track uncertainties arising within the modeling and formalization phases. Estimation of trust in reported information, a real-world problem of interest to practitioners in the field of security, was adopted for illustration purposes. A functional model of trust was developed to describe the analysis of reported information, and it was implemented with belief functions. When assessing trust in reported information, the uncertainty arises not only from the quality of sources or information content, but also from the inability of models to capture the complex chain of interactions leading to the final outcome and from constraints imposed by the representation formalism. A primary goal of this work is to separate known approximations, imperfections and inaccuracies from potential errors, while explicitly tracking the uncertainty from the modeling to the formalization phase. A secondary goal is to illustrate how criteria of the URREF ontology can offer a basis for analyzing the performance of fusion systems at early stages, ahead of implementation. Ideally, since uncertainty analysis runs dynamically, it can use the presence or absence of observed states and processes inducing uncertainty to adjust the tradeoff between precision and performance of systems on the fly.
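    Since the trust model is implemented with belief functions, a minimal example of evidence combination may help: the sketch below applies Dempster's rule to two invented mass functions about a report's reliability. The frame, masses and variable names are assumptions, not taken from the paper.

        # Hedged sketch: combining two belief assignments about a report's reliability
        # with Dempster's rule of combination.
        from itertools import product

        def dempster(m1, m2):
            """m1, m2: dicts mapping frozenset focal elements to masses summing to 1."""
            combined, conflict = {}, 0.0
            for (a, x), (b, y) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + x * y
                else:
                    conflict += x * y                  # mass assigned to the empty set
            return {s: v / (1.0 - conflict) for s, v in combined.items()}

        T, NT = frozenset({"reliable"}), frozenset({"unreliable"})
        theta = T | NT                                 # total ignorance
        m_source  = {T: 0.6, theta: 0.4}               # evidence about the source's quality
        m_content = {T: 0.3, NT: 0.2, theta: 0.5}      # evidence about the content itself
        print(dempster(m_source, m_content))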

    Flow-based reputation with uncertainty: Evidence-Based Subjective Logic

    The concept of reputation is widely used as a measure of trustworthiness based on ratings from members in a community. The adoption of reputation systems, however, relies on their ability to capture the actual trustworthiness of a target. Several reputation models for aggregating trust information have been proposed in the literature. The choice of model has an impact on the reliability of the aggregated trust information as well as on the procedure used to compute reputations. Two prominent models are flow-based reputation (e.g., EigenTrust, PageRank) and Subjective Logic based reputation. Flow-based models provide an automated method to aggregate trust information, but they are not able to express the level of uncertainty in the information. In contrast, Subjective Logic extends probabilistic models with an explicit notion of uncertainty, but the calculation of reputation depends on the structure of the trust network and often requires information to be discarded. These are severe drawbacks. In this work, we observe that the 'opinion discounting' operation in Subjective Logic has a number of basic problems. We resolve these problems by providing a new discounting operator that describes the flow of evidence from one party to another. The adoption of our discounting rule results in a consistent Subjective Logic algebra that is entirely based on the handling of evidence. We show that the new algebra enables the construction of an automated reputation assessment procedure for arbitrary trust networks, where the calculation no longer depends on the structure of the network, and does not need to throw away any information. Thus, we obtain the best of both worlds: flow-based reputation and consistent handling of uncertainties.
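    The following sketch builds binomial Subjective Logic opinions from evidence counts and applies the classical multiplicative discounting operator for comparison; the evidence-based discounting operator proposed in this paper differs and is defined in the text. Names and example counts are illustrative.

        # Illustrative sketch: binomial opinions from evidence, plus classical discounting.
        from dataclasses import dataclass

        W = 2.0  # non-informative prior weight

        @dataclass
        class Opinion:
            b: float        # belief
            d: float        # disbelief
            u: float        # uncertainty
            a: float = 0.5  # base rate

        def from_evidence(r, s, a=0.5):
            """Map r positive and s negative observations to an opinion."""
            total = r + s + W
            return Opinion(r / total, s / total, W / total, a)

        def discount(trust, opinion):
            """Classical (multiplicative) discounting of `opinion` through `trust`."""
            return Opinion(
                trust.b * opinion.b,
                trust.b * opinion.d,
                trust.d + trust.u + trust.b * opinion.u,
                opinion.a,
            )

        alice_trusts_bob = from_evidence(8, 1)   # Alice's evidence about Bob
        bob_rates_carol  = from_evidence(3, 3)   # Bob's evidence about Carol
        print(discount(alice_trusts_bob, bob_rates_carol))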

    Managing Reputation in Collaborative Social Computing Applications

    Reputation is a fundamental concept for making decisions about service providers. However, managing reputation in peer-to-peer distributed applications is not easy due to the lack of a central server that can compute this property from user opinions. Moreover, users have to marry this information with their individual trust in the service provider, which may be based on their past experiences, the opinions of their direct contacts, or both. This paper develops a reputation management system embedded in the Digital Avatars framework for collaborative social computing applications, using subjective logic. We show how the reputation of a given service provider can be calculated using the users' opinions about it, and how this reputation can be explicitly represented, managed and combined with the trust that individual service requesters may have in the provider, in order to make better informed decisions. This work is funded by the Spanish research projects PGC2018-094905-B-100 and RTI2018-098780-B-I00.
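    As a hedged illustration of how users' opinions could be aggregated into a provider reputation with subjective logic, the sketch below performs cumulative fusion by summing the positive and negative evidence behind each user's binomial opinion; it is not the Digital Avatars implementation, and all names are invented.

        # Minimal sketch: provider reputation as cumulative fusion of users' opinions,
        # computed by summing the underlying positive/negative evidence counts.
        W = 2.0  # non-informative prior weight used by subjective logic

        def fuse_reputation(experiences):
            """experiences: list of (positive, negative) interaction counts reported by users.
            Returns (belief, disbelief, uncertainty) of the fused reputation opinion."""
            r = sum(p for p, _ in experiences)
            s = sum(n for _, n in experiences)
            total = r + s + W
            return r / total, s / total, W / total

        # Three users report their interactions with the same service provider.
        print(fuse_reputation([(5, 0), (3, 1), (0, 2)]))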

    How Does Science Come to Speak in the Courts? Citations, Intertexts, Expert Witnesses, Consequential Facts, and Reasoning

    Citations, in their highly conventionalized forms, visibly indicate each text's explicit use of the prior literature that embodies the knowledge and contentions of its field. This relation to prior texts has been called intertextuality in literary and literacy studies. Here, Bazerman discusses the citation practices and intertextuality of science and the law in theoretical and historical perspective, and considers the intersection of science and law by identifying the judicial rules that limit and shape the role of scientific literature in court proceedings. He emphasizes that, from the historical and theoretical analysis, it is clear that, in the US, judicial reasoning is an intertextually tight and self-referring system that pays only limited attention to documents outside the laws, precedents, and judicial rules. The window for scientific literature to enter the courts is narrow, focused, and highly filtered. It serves as a warrant for the expert witnesses' expertise, which in turn makes their opinion admissible in a way not available to ordinary witnesses.

    Using Norms To Control Open Multi-Agent Systems

    The Internet is perhaps the most relevant scientific advance of our time. Among other things, the Internet has enabled the evolution of traditional computing paradigms towards the distributed computing paradigm, which is characterized by the use of an open network of computers. Multi-agent systems (MAS) are a suitable technology for addressing the challenges posed by these open distributed systems. MAS are applications formed by heterogeneous and autonomous agents that may have been designed independently, according to different goals and motivations. Therefore, no a priori assumptions can be made about the behaviour of the agents. For this reason, MAS need coordination and cooperation mechanisms, such as norms, to guarantee social order and avoid the emergence of conflicts. The term norm covers two different dimensions: i) norms as an instrument that guides citizens when performing actions and activities, so that norms define the procedures and/or protocols to be followed in a specific situation; and ii) norms as orders or prohibitions backed by a system of sanctions, so that norms are a means to prevent or punish certain actions. In the area of MAS, norms have been used as a formal specification of what is permitted, obliged and prohibited within a society. In this way, norms make it possible to regulate the lives of software agents and the interactions among them. The main motivation of this thesis is to allow MAS designers to use norms as a mechanism to control and coordinate open MAS. Our aim is to develop normative mechanisms at two levels: the agent level and the infrastructure level. Therefore, this thesis first addresses the problem of defining autonomous normative agents that are capable of deliberating about… Criado Pacheco, N. (2012). Using Norms To Control Open Multi-Agent Systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/17800
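    As a purely illustrative aside (not the thesis's formal model), a norm of the kind described above can be sketched as a deontic rule with a modality, a regulated action, an activation condition and a sanction; every name below is hypothetical.

        # Hypothetical sketch of a norm as a deontic rule and a simple violation check.
        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class Norm:
            modality: str                    # "obligation", "prohibition" or "permission"
            action: str
            applies: Callable[[dict], bool]  # activation condition over the agent's context
            sanction: float = 0.0            # penalty if violated

        def violated(norm, context, performed_actions):
            if not norm.applies(context):
                return False
            if norm.modality == "obligation":
                return norm.action not in performed_actions
            if norm.modality == "prohibition":
                return norm.action in performed_actions
            return False                     # permissions cannot be violated

        no_spam = Norm("prohibition", "broadcast", lambda ctx: ctx["open_system"], sanction=5.0)
        print(violated(no_spam, {"open_system": True}, {"broadcast"}))  # True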

    Nothing But the Truth? Experiments on Adversarial Competition, Expert Testimony, and Decision Making

    Many scholars debate whether a competition between experts in legal, political, or economic contexts elicits truthful information and, in turn, enables people to make informed decisions. Thus, we analyze experimentally the conditions under which competition between experts induces the experts to make truthful statements and enables jurors listening to these statements to improve their decisions. Our results demonstrate that, contrary to game-theoretic predictions and contrary to critics of our adversarial legal system, competition induces enough truth telling to allow jurors to improve their decisions. Then, when we impose additional institutions (such as penalties for lying or the threat of verification) on the competing experts, we observe even larger improvements in the experts' propensity to tell the truth and in jurors' decisions. We find similar improvements when the competing experts are permitted to exchange reasons for why their statements may be correct.