Multi-agent Confidential Abductive Reasoning
In the context of multi-agent hypothetical reasoning, agents typically have partial knowledge about their environments, and the union of such knowledge is still an incomplete representation of the whole world. Thus, given a global query, they collaborate with each other to make correct inferences and hypotheses whilst maintaining global constraints. Most collaborative reasoning systems operate on the assumption that agents can share or communicate any information they have. However, in application domains like multi-agent systems for healthcare or distributed software agents for security policies in coalition networks, confidentiality of knowledge is an additional primary concern. These agents are required to collaboratively compute consistent answers for a query whilst preserving their own private information. This paper addresses this issue, showing how the dichotomy between "open communication" in collaborative reasoning and the protection of confidentiality can be accommodated. We present a general-purpose distributed abductive logic programming system for multi-agent hypothetical reasoning with confidentiality. Specifically, the system computes consistent conditional answers for a query over a set of distributed normal logic programs with possibly unbound domains and arithmetic constraints, preserving the private information within the logic programs. A case study on security policy analysis in distributed coalition networks is described, as an example of the many applications of this system.
Distributed Abductive Reasoning: Theory, Implementation and Application
Abductive reasoning is a powerful logic inference mechanism that allows assumptions to be made during answer computation for a query, and is thus suitable for reasoning over incomplete knowledge. Multi-agent hypothetical reasoning is the application of abduction in a distributed setting, where each computational agent has local knowledge representing a partial world and the union of all agents' knowledge is still incomplete. It differs from simple distributed query processing because the assumptions made by the agents must also be consistent with global constraints.
Multi-agent hypothetical reasoning has many potential applications, such as collaborative planning and scheduling, distributed diagnosis, and cognitive perception. Many of these applications require the representation of arithmetic constraints in their problem specifications, as well as constraint satisfaction support during the computation. In addition, some applications may have confidentiality concerns that restrict the information that can be exchanged between the agents during their collaboration. Although a limited number of distributed abductive systems have been developed, none of them is generic enough to support the above requirements.
In this thesis we develop, in the spirit of Logic Programming, a generic and extensible distributed abductive system that has the potential to target a wide range of distributed problem solving applications. The underlying distributed inference algorithm incorporates constraint satisfaction and allows non-ground conditional answers to be computed. Its soundness and completeness have been proved. The algorithm is customisable in that different inference and coordination strategies (such as goal selection and agent selection strategies) can be adopted while maintaining correctness. A customisation that supports confidentiality during problem solving has been developed, and is used in application domains such as distributed security policy analysis. Finally, for evaluation purposes, a flexible experimental environment has been built for automatically generating different classes of distributed abductive constraint logic programs. This environment has been used to conduct an empirical investigation of the performance of the customised system.
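The core abductive step described above — choosing assumptions that entail a query without violating integrity constraints — can be sketched in a minimal propositional form. The rules, abducibles, and constraint below are illustrative assumptions; the thesis system additionally handles non-ground answers, arithmetic constraints, and distribution across agents.

```python
from itertools import chain, combinations

RULES = {                     # head -> list of alternative rule bodies
    "wet_grass": [["rain"], ["sprinkler"]],
    "slippery":  [["wet_grass"]],
}
ABDUCIBLES = {"rain", "sprinkler"}
CONSTRAINTS = [{"rain", "sprinkler"}]   # integrity constraint: not both at once

def entails(assumed, goal):
    """True if `goal` follows from the rules plus the assumed abducibles."""
    if goal in assumed:
        return True
    return any(all(entails(assumed, g) for g in body)
               for body in RULES.get(goal, []))

def explanations(query):
    """All minimal, constraint-consistent sets of abducibles entailing the query."""
    found = []
    subsets = chain.from_iterable(
        combinations(sorted(ABDUCIBLES), r) for r in range(len(ABDUCIBLES) + 1))
    for subset in subsets:        # enumerated smallest-first, so minimality holds
        s = set(subset)
        if any(c <= s for c in CONSTRAINTS):
            continue              # violates an integrity constraint
        if entails(s, query) and not any(f < s for f in found):
            found.append(s)
    return found

print(explanations("slippery"))   # → [{'rain'}, {'sprinkler'}]
```

Each returned set is a conditional answer: the query holds provided the assumptions in the set hold.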
A Temporal Framework for Hypergame Analysis of Cyber Physical Systems in Contested Environments
Game theory is used to model conflicts between one or more players over resources. It offers players a way to reason, allowing rationale for selecting strategies that avoid the worst outcome. Game theory lacks the ability to incorporate advantages one player may have over another player. A meta-game, known as a hypergame, occurs when one player does not know or fully understand all the strategies of a game. Hypergame theory builds upon the utility of game theory by allowing a player to outmaneuver an opponent, thus obtaining a more preferred outcome with higher utility. Recent work in hypergame theory has focused on normal form static games that lack the ability to encode several realistic strategies. One example of this is when a player’s available actions in the future are dependent on selections made in the past. This work presents a temporal framework for hypergame models. This framework is the first application of temporal logic to hypergames and provides more flexible modeling for domain experts. With this new framework for hypergames, the concepts of trust, distrust, mistrust, and deception are formalized. While past literature references deception in hypergame research, this work is the first to formalize the definition for hypergames. As a demonstration of the new temporal framework for hypergames, it is applied to classical game theoretical examples, as well as a complex supervisory control and data acquisition (SCADA) network temporal hypergame. The SCADA network example includes actions that have a temporal dependency, where a choice in the first round affects what decisions can be made in later rounds of the game. The demonstration results show that the framework is a realistic and flexible modeling method for a variety of applications.
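The hypergame idea above — each player best-responding within the game they *perceive* — can be illustrated with a minimal first-level hypergame. The strategy names and payoffs below are illustrative assumptions, not taken from the paper: the defender is unaware of the attacker's "zero_day" strategy, so the attacker obtains a more preferred outcome.

```python
# PAYOFFS[(row, col)] = (defender's utility, attacker's utility)
PAYOFFS = {
    ("patch",  "known_exploit"): (3, 0),
    ("patch",  "zero_day"):      (0, 3),
    ("ignore", "known_exploit"): (1, 2),
    ("ignore", "zero_day"):      (1, 2),
}

def best_response(my_moves, opp_moves, me):
    """Maximin choice over the strategies this player believes exist
    (me = 0 for the row/defender player, 1 for the column/attacker)."""
    def worst_case(mine):
        return min(PAYOFFS[(mine, o) if me == 0 else (o, mine)][me]
                   for o in opp_moves)
    return max(my_moves, key=worst_case)

# Defender's perceived game omits "zero_day"; the attacker sees the full game.
defender = best_response(["patch", "ignore"], ["known_exploit"], 0)
attacker = best_response(["known_exploit", "zero_day"], ["patch", "ignore"], 1)
print(defender, attacker, PAYOFFS[(defender, attacker)])   # → patch zero_day (0, 3)
```

The defender's maximin choice in its perceived game ("patch") is exactly what the better-informed attacker exploits, which is the outmaneuvering that hypergame analysis captures; the temporal framework in the paper additionally lets such perceptions and available actions change from round to round.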
A Survey on Understanding and Representing Privacy Requirements in the Internet-of-Things
People are interacting with online systems all the time. In order to use the services being provided, they give consent for their data to be collected. This approach requires too much human effort and is impractical for systems like the Internet-of-Things (IoT), where the number of human-device interactions can be large. Ideally, privacy assistants can help humans make privacy decisions while working in collaboration with them. In our work, we focus on the identification and representation of privacy requirements in IoT to help privacy assistants better understand their environment. In recent years, the focus has been more on the technical aspects of privacy. However, the dynamic nature of privacy also requires a representation of social aspects (e.g., social trust). In this survey paper, we review the privacy requirements represented in existing IoT ontologies. We discuss how to extend these ontologies with new requirements to better capture privacy, and we introduce case studies to demonstrate the applicability of the novel requirements.
Securing intellectual capital: an exploratory study in Australian universities
Purpose – To investigate the links between IC and the protection of data, information and knowledge in universities, as organizations with unique knowledge-related foci and challenges.
Design/methodology/approach – We gathered insights from existing IC-related research publications to delineate key foundational aspects of IC, and to identify and propose links to traditional information security that impact the protection of IC. We conducted interviews with key stakeholders in Australian universities in order to validate these links.
Findings – Our investigation revealed two kinds of embeddedness characterizing the organizational fabric of universities, (1) vertical and (2) horizontal, with an emphasis on the connection between these and IC-related knowledge protection within these institutions.
Research implications – There is a need to acknowledge the different roles played by actors within the university, and the relevance of information security to IC-related preservation.
Practical implications – Framing information security as an IC-related issue can help IT security managers communicate the need for knowledge security to executives in higher education, and secure funding to preserve and secure such IC-related knowledge once its value is recognized.
Originality/value – This is one of the first studies to explore the connections between data and information security and the three core components of IC’s knowledge security in the university context.
Toward an Analysis of the Abductive Moral Argument for God’s Existence: Assessing the Evidential Quality of Moral Phenomena and the Evidential Virtuosity of Christian Theological Models
The moral argument for God’s existence is perhaps the oldest and most salient of the arguments from natural theology. In contemporary literature, there has been a focus on the abductive version of the moral argument. Although the mode of reasoning, abduction, has been articulated, there has not been a robust articulation of the individual components of the argument. Such an articulation would include the data quality of moral phenomena, the theoretical virtuosity of theological models that explain the moral phenomena, and how both contribute to the likelihood of moral arguments. The goal of this paper is to provide such an articulation. Our method is to catalog the phenomena, sort them by their location on the emergent hierarchy of sciences, and then describe how the ecumenical Christian theological model exemplifies evidential virtues in explaining them. Our results show that moral arguments are of neither the highest nor the lowest quality, yet can be assented to on a principled level of investigation, especially given existential considerations.
Assessing the genuineness of events in runtime monitoring of cyber systems
Monitoring security properties of cyber systems at runtime is necessary if the preservation of such properties cannot be guaranteed by formal analysis of their specification. It is also necessary if the runtime interactions between their components, distributed over different types of local and wide area networks, cannot be fully analysed before putting the systems in operation. The effectiveness of runtime monitoring depends on the trustworthiness of the runtime system events that are analysed by the monitor. In this paper, we describe an approach for assessing the trustworthiness of such events. Our approach is based on the generation of possible explanations of runtime events from a diagnostic model of the system under surveillance using abductive reasoning, and the confirmation of the validity of such explanations and the runtime events using belief-based reasoning. The assessment process that we have developed based on this approach has been implemented as part of the EVEREST runtime monitoring framework and has been evaluated in a series of simulations that are discussed in the paper.
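The two-step assessment described above can be sketched as follows. The diagnostic model, event names, and belief score here are illustrative assumptions, not the EVEREST implementation: an abductive step proposes causes that would explain a runtime event, and a confirmation step raises belief in explanations whose other predicted effects were also observed.

```python
DIAGNOSTIC_MODEL = {                     # cause -> events that cause would produce
    "genuine_login":  ["login_event", "session_start"],
    "spoofed_sensor": ["login_event", "anomalous_timing"],
}

def explain(event):
    """Abductive step: every cause whose predicted effects include the event."""
    return [c for c, effects in DIAGNOSTIC_MODEL.items() if event in effects]

def belief(cause, observed):
    """Confirmation step: the fraction of the cause's predicted effects
    that were actually observed at runtime (a stand-in belief measure)."""
    effects = DIAGNOSTIC_MODEL[cause]
    return sum(e in observed for e in effects) / len(effects)

observed = {"login_event", "session_start"}
ranked = sorted(explain("login_event"),
                key=lambda c: belief(c, observed), reverse=True)
print(ranked)   # → ['genuine_login', 'spoofed_sensor']
```

An event is judged more genuine when its best-supported explanation is one whose predicted side effects are corroborated by the other monitored events.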