187 research outputs found

    A Bayesian Abduction Model For Sensemaking

    This research develops a Bayesian Abduction Model for Sensemaking Support (BAMSS) for information fusion in sensemaking tasks. Two methods are investigated. The first is classical Bayesian information fusion with belief updating (using a Bayesian clustering algorithm) and abductive inference. The second method, BAMSS-GA, uses a Genetic Algorithm to search for the k-best most probable explanations (MPEs) in the network. Using various data from the recent Iraq and Afghanistan conflicts, experimental simulations were conducted to compare the methods by their posterior probability values, which can provide insightful information for prospective sensemaking. The inference results demonstrate the utility of BAMSS as a computational model for sensemaking. The major results are: (1) the inference results from BAMSS-GA gave average posterior probabilities that were 10^3 better than those produced by BAMSS; (2) BAMSS-GA gave more consistent posterior probabilities, as measured by their variances; and (3) BAMSS gave a single MPE, while BAMSS-GA identified the optimal values for the k MPEs. In the experiments, out of 20 MPEs generated by BAMSS, BAMSS-GA identified 7 plausible network solutions, so that less information was needed for sensemaking and the inference search space was reduced to 7/20 (35%) of its original size. The results reveal that a GA can be used successfully in Bayesian information fusion as a search technique to identify the significant posterior probabilities useful for sensemaking. BAMSS-GA was also more robust in overcoming the bounded-search problem that constrains Bayesian clustering and the inference state space in BAMSS.
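
    The abstract does not spell out BAMSS-GA's chromosome encoding or genetic operators, so the following is only a rough illustration of the underlying idea: a genetic algorithm searching for the k best most probable explanations in a small Bayesian network. The toy chain network, its CPT values, and the GA parameters are all invented for the sketch.

        import random
        random.seed(1)

        # Toy chain-structured Bayesian network X1 -> X2 -> X3 -> X4 -> X5
        # over binary variables; all CPT values are invented.
        P_X1 = 0.3                                   # P(X1 = 1)
        CPT = [(0.2, 0.9), (0.4, 0.7), (0.1, 0.8), (0.3, 0.6)]
        # CPT[i] = (P(X_{i+2}=1 | X_{i+1}=0), P(X_{i+2}=1 | X_{i+1}=1))
        EVIDENCE = 1                                 # observed value of X5

        def joint(assign):
            """P(X1..X4 = assign, X5 = EVIDENCE): score of an explanation."""
            p = P_X1 if assign[0] else 1.0 - P_X1
            vals = list(assign) + [EVIDENCE]
            for i in range(4):
                p_child = CPT[i][vals[i]]
                p *= p_child if vals[i + 1] else 1.0 - p_child
            return p

        def crossover(a, b):
            cut = random.randrange(1, 4)             # one-point crossover
            return a[:cut] + b[cut:]

        def mutate(ind, rate=0.2):
            return tuple(bit ^ (random.random() < rate) for bit in ind)

        def ga_k_best(k=3, pop_size=12, gens=30):
            """Evolve 4-bit explanations; return the k best distinct ones."""
            pop = [tuple(random.randint(0, 1) for _ in range(4))
                   for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=joint, reverse=True)
                parents = pop[:pop_size // 2]        # truncation selection
                children = [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(pop_size - len(parents))]
                pop = parents + children
            return sorted(set(pop), key=joint, reverse=True)[:k]

        for mpe in ga_k_best():
            print(mpe, round(joint(mpe), 4))

    At this size exhaustive enumeration is of course feasible; the GA-style search only pays off on networks whose explanation space is too large to enumerate, which is the setting the paper targets.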

    Verification of Pointer-Based Programs with Partial Information

    The proliferation of software across all aspects of people's lives means that software failure can bring catastrophic results. It is therefore highly desirable to be able to develop software that is verified to meet its expected specification. This has also been identified as a key objective in one of the UK Grand Challenges (GC6) (Jones et al., 2006; Woodcock, 2006). However, many difficult problems remain in achieving this objective, partially due to the wide use of (recursive) shared mutable data structures, which are hard to track statically in a precise and concise way. This thesis aims at building a verification system for both the memory safety and the functional correctness of programs manipulating pointer-based data structures, covering two scenarios in which only partial information about the program is available: the verifier may be supplied with only a partial program specification, or with a full specification but only part of the program code.

    For the first scenario, previous state-of-the-art works (Nguyen et al., 2007; Chin et al., 2007; Nguyen and Chin, 2008; Chin et al., 2010) generally require users to provide full specifications for each method of the program to be verified. That approach demands considerable intellectual effort from users, who are moreover liable to make mistakes when writing such specifications. This thesis proposes a new approach to program verification that allows users to provide only partial specifications for methods. Our approach then refines the given annotations into more complete specifications by discovering missing constraints. The discovered constraints may involve both numerical and multiset properties that can later be confirmed or revised by users. We further augment the approach by requiring partial specifications only for the primary methods of a program; specifications for loops and auxiliary methods are then systematically discovered by our augmented mechanism, with the help of information propagated from the primary methods. This work aims at verifying beyond shape properties, with the eventual goal of analysing both memory safety and functional properties of pointer-based data structures. Initial experiments have confirmed that we can automatically refine partial specifications with non-trivial constraints, making it easier for users to handle specifications with richer properties.

    For the second scenario, many programs contain invocations of unknown components, so only part of the program code is available to the verifier. As previous works generally require the whole program code to be present, we target the verification of memory safety and functional correctness of programs manipulating pointer-based data structures whose code is only partially available due to invocations of unknown components. Provided with a Hoare-style specification {Pre} prog {Post}, where the program prog contains calls to some unknown procedure unknown, we infer a specification mspec_u for the unknown part from the calling contexts, such that the problem of verifying prog can be safely reduced to proving that unknown (once its code is available) meets the derived specification mspec_u. The expected specification mspec_u is automatically calculated using an abduction-based shape analysis specifically designed for a combined abstract domain. We have implemented a system to validate the viability of our approach, with encouraging experimental results.
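
    The thesis's analysis works over a combined shape and numerical abstract domain; as a much-simplified, purely numerical illustration of the same idea, deriving a specification for an unknown call from its calling context, here is a sketch using the Z3 SMT solver. Z3 merely stands in for the entailment machinery, and the program, predicates, and derived spec below are invented for the example.

        from z3 import Int, Not, Solver, substitute, unsat

        x = Int('x')

        # Program to verify:
        #   { x >= 0 }  x := x + 1;  unknown(x);  x := x * 2  { x > 2 }
        # Precondition for unknown: the strongest fact known at the call
        # site, sp(x >= 0, x := x + 1), which simplifies to x >= 1.
        pre_unknown = x >= 1
        # Postcondition for unknown: what the remaining code needs,
        # wp(x := x * 2, x > 2), i.e. x * 2 > 2 over unknown's output.
        post_unknown = substitute(x > 2, (x, x * 2))

        # Derived spec: { x >= 1 } unknown(x) { x * 2 > 2 }. Verifying prog
        # now reduces to checking unknown against this spec once its code
        # arrives. E.g. if unknown turns out to be x := x + 1, validity of
        # pre_unknown => post_unknown[x + 1 / x] closes the proof:
        s = Solver()
        s.add(pre_unknown, Not(substitute(post_unknown, (x, x + 1))))
        print(s.check() == unsat)   # True: x := x + 1 meets the derived spec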

    XAI human-machine collaboration applied to network security

    Cyber attack is easier than cyber defense: attackers need only find one breach, while defenders must successfully repel every attack. This research demonstrates how cyber defenders can increase their capabilities by joining forces with eXplainable AI (XAI) through interactive human-machine collaboration. With a global shortfall of cyber defenders, there is a need to amplify their skills using AI. Cyber asymmetries make propositional machine learning techniques impractical, and human reasoning and skill are a key ingredient of defense that must be embedded in the AI framework. For human-machine collaboration to work, the AI must be an ultra-strong machine learner able to explain its models. Unlike deep learning, Inductive Logic Programming can communicate what it learns to a human. An empirical study was undertaken using six months of eavesdropped network traffic from an organization generating up to 562K network events daily. Easier-to-defend devices were identified using a form of the Good-Turing frequency estimator, which is a promising volatility measure. A behavioral-cloning grammar in explicit symbolic form was then produced from a single device's network activity using the compression algorithm SEQUITUR, and a novel visualization was generated to allow defenders to identify network sequences they wish to explain. Interactive Inductive Logic Programming (the XAI) is supplied with the network-traffic metadata, sophisticated pre-existing cyber-security background knowledge, and one recurring sequence of events from a single device to explain. A co-inductive process between the human cyber defender and the XAI, in which the human understands, then refutes and shapes the XAI's developing model, produces a model that conforms with the data as well as with the original device designers' programming. The acceptable model is in a form that can be deployed as an ongoing active cyber defense.
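
    The paper says only that "a form of" the Good-Turing frequency estimator was used, so the exact variant is unknown. The sketch below shows one plausible reading: the classic Good-Turing missing-mass estimate P0 = N1/N, the estimated probability that a device's next network event is of a type never seen before. A device with low missing mass has a stable, predictable traffic mix and is plausibly easier to defend. The event streams are hypothetical.

        from collections import Counter

        def good_turing_missing_mass(events):
            """Good-Turing estimate P0 = N1 / N of the probability that
            the next observation is an event type never seen before."""
            counts = Counter(events)
            n1 = sum(1 for c in counts.values() if c == 1)  # singletons
            n = len(events)                                 # observations
            return n1 / n if n else 1.0

        # Hypothetical event-type streams for two devices:
        printer = ["dns", "ipp", "dns", "ipp", "ipp", "dns", "ipp", "dns"]
        laptop  = ["dns", "http", "ssh", "smb", "http", "rdp", "ntp", "imap"]

        print(good_turing_missing_mass(printer))  # 0.0  -> stable traffic
        print(good_turing_missing_mass(laptop))   # 0.75 -> volatile traffic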

    Every normal logic program has a 2-valued semantics: theory, extensions, applications, implementations

    Work presented within the scope of the Doctoral Programme in Informatics, as a partial requirement for obtaining the degree of Doctor in Informatics.

    After a very brief introduction to the general subject of Knowledge Representation and Reasoning with Logic Programs, we analyse the syntactic structure of a logic program and how it can influence the semantics. We outline the important properties of a 2-valued semantics for Normal Logic Programs (NLPs), proceed to define the new Minimal Hypotheses semantics with those properties, and explore how it can be used to benefit some knowledge representation and reasoning mechanisms. The main original contributions of this work, whose connections are detailed in the sequel, are:

    • The Layering for generic graphs, which we then apply to NLPs, yielding the Rule Layering and Atom Layering — a generalization of the stratification notion;
    • The Full shifting transformation of Disjunctive Logic Programs into (highly non-stratified) NLPs;
    • The Layer Support — a generalization of the classical notion of support;
    • The Brave Relevance and Brave Cautious Monotony properties of a 2-valued semantics;
    • The notions of Relevant Partial Knowledge Answer to a Query and Locally Consistent Relevant Partial Knowledge Answer to a Query;
    • The Layer-Decomposable Semantics family — the family of semantics that reflect the above-mentioned Layerings;
    • The Approved Models argumentation approach to semantics;
    • The Minimal Hypotheses 2-valued semantics for NLPs — a member of the Layer-Decomposable Semantics family, rooted in an approach that minimizes the assumption of positive hypotheses;
    • The definition and implementation of the Answer Completion mechanism in XSB Prolog — an essential component to ensure full compliance of XSB's WAM with the Well-Founded Semantics;
    • The definition of the Inspection Points mechanism for Abductive Logic Programs;
    • An implementation of the Inspection Points within the Abdual system [21].

    We recommend reading the chapters in this thesis in the sequence they appear. However, if the reader is not interested in all the subjects, or is keener on some topics than others, we provide the following alternative reading paths:

    • Chapters 1-2-3-4-5-6-7-8-9-12: definition of the Layer-Decomposable Semantics family and the Minimal Hypotheses semantics (chapters 1 and 2 are optional);
    • Chapters 3-6-7-8-10-11-12: all main contributions (assumes the reader is familiar with logic programming topics);
    • Chapters 3-4-5-10-11-12: focus on abductive reasoning and applications.

    FCT-MCTES (Fundação para a Ciência e Tecnologia do Ministério da Ciência, Tecnologia e Ensino Superior), grant no. SFRH/BD/28761/2006.
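
    Of the listed contributions, the shifting transformation has a compact textbook form: each disjunct of a rule head is kept in turn, with the remaining head atoms moved to the body under default negation. The sketch below implements that standard per-rule shifting in Python; the thesis's Full shifting applies the idea across all rules of the program and may differ in detail.

        def shift(heads, body):
            """Shift a disjunctive rule  h1 v ... v hn <- body  into the n
            normal rules  hi <- body, not h1, ..., not h(i-1),
            not h(i+1), ..., not hn."""
            return [(h, list(body) + ["not " + g for g in heads if g != h])
                    for h in heads]

        # Example: the disjunctive rule  a v b <- c.
        for head, body in shift(["a", "b"], ["c"]):
            print(head, ":-", ", ".join(body) + ".")
        # Prints the shifted normal rules:
        #   a :- c, not b.
        #   b :- c, not a.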

    Advanced Automation for Space Missions

    The feasibility of using machine intelligence, including automation and robotics, in future space missions was studied.

    The 1995 Goddard Conference on Space Applications of Artificial Intelligence and Emerging Information Technologies

    This publication comprises the papers presented at the 1995 Goddard Conference on Space Applications of Artificial Intelligence and Emerging Information Technologies, held at the NASA/Goddard Space Flight Center, Greenbelt, Maryland, on May 9-11, 1995. The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed.

    Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment

    Ensuring alignment, which means making models behave in accordance with human intentions [1,2], has become a critical task before deploying large language models (LLMs) in real-world applications. For instance, OpenAI devoted six months to iteratively aligning GPT-4 before its release [3]. However, a major challenge faced by practitioners is the lack of clear guidance on evaluating whether LLM outputs align with social norms, values, and regulations; this obstacle hinders the systematic iteration and deployment of LLMs. To address this issue, this paper presents a comprehensive survey of the key dimensions that are crucial to consider when assessing LLM trustworthiness. The survey covers seven major categories of LLM trustworthiness: reliability, safety, fairness, resistance to misuse, explainability and reasoning, adherence to social norms, and robustness. Each major category is further divided into several sub-categories, resulting in a total of 29 sub-categories. Additionally, a subset of 8 sub-categories is selected for further investigation, for which corresponding measurement studies are designed and conducted on several widely used LLMs. The measurement results indicate that, in general, more aligned models tend to perform better in terms of overall trustworthiness; however, the effectiveness of alignment varies across the trustworthiness categories considered. This highlights the importance of conducting more fine-grained analyses and tests, and of making continuous improvements to LLM alignment. By shedding light on these key dimensions of LLM trustworthiness, the paper aims to provide valuable insights and guidance to practitioners in the field. Understanding and addressing these concerns will be crucial for the reliable and ethically sound deployment of LLMs in various applications.

    What is “meta-” for? : a Peircean critique of the cognitive theory of metaphor

    My thesis aims to anatomize the cognitive theory of metaphor and to suggest a Peircean semiotic perspective on metaphor study. As metaphorical essentialists, Lakoff and Johnson tend to universalize a limited number of conceptual metaphors and, in doing so, overlook the dynamic relation between metaphorical tenor and vehicle. Such a notion of metaphor is not compatible with the polysemous nature of the sign. The diversity and multivalency of the metaphorical vehicle, in particular, cast serious doubt on the hypothesis of “conceptual metaphors”, which, being meta-metaphorical constructs, can tell us nothing but the dry and empty formula “A is B”. Consequently, Lakoff and Johnson’s notion of conceptual metaphor is very much a Chomskyan postulation. Also problematic is the expedient experientialism, or embodied philosophy, they have put forward as a middle course between objectivism and subjectivism. What is missing from their framework is a structural space for dynamic interpretation on the part of metaphor users. In contrast, cognitive linguists may find in Peirce’s theory of the sign a sound solution to their theoretical impasse. As a logician, Peirce sees metaphor as the realization of iconic reasoning at the level of language. His exposition of iconicity and iconic reasoning laid a solid foundation upon which a fresh epistemology of metaphor, fit for the contemporary study of language and mind, may be erected. Broadly speaking, metaphor in Peirce can be examined from two perspectives: macroscopically, metaphor is an icon in general, as opposed to index and symbol, whereas, microscopically, it is a subdivided hypoicon on the third level, as opposed to image and diagram. Peirce also emphasized the subjective nature of metaphor. Semioticians after Peirce have further developed his theory of metaphor. For example, through his concept of “arbitrary iconicity”, Ersu Ding stresses the arbitrary nature of metaphorization and tries to shift our attention away from Lakoff and Johnson’s abstract epistemological Gestalt to the specific cultural contexts in which metaphors occur. Umberto Eco, on the other hand, sees the interpretation of signs as an open-ended process that involves knowledge of all kinds; encyclopedic knowledge thus serves as an unlimited source for metaphorical association. For Eco, the meaning of a metaphor should be interpreted within the cultural framework of a specific cultural community. Both Ding’s and Eco’s ideas are in line with Peirce’s theoretical framework, in which the meaning of a metaphor depends on an interpreter in a particular socio-historical context. They all realize that we should go beyond the ontology of metaphorical expressions to acquire a dynamic perspective on metaphor interpretation. To overcome the need to presuppose an omnipotent subject capable of knowing the metaphor-in-itself, we turn to Habermas’s theory of communicative action, in which the meaning of metaphor is intersubjectively established through negotiation and communication. Moreover, we should not overlook the dynamic tension between metaphor and ideology. Aphoristically, we can say that nothing is a metaphor unless it is interpreted as a metaphor, and we need to reconnect metaphors with the specific cultural and ideological contexts in which they appear.

    Fake news diffusion on digital channels: An analysis of attack strategies, responsibilities, and corporate responses

    In recent years, the phenomenon of fake news has aroused growing interest in the academic debate because of its capability to spread easily across digital channels, such as social media platforms, and to reach and deceive an increasingly large population of digital users. In this scenario, fake news threatens the credibility of organizations, their products and services, the trust relationship between organizations and consumers, and organizations' internal communities. Organizations today risk losing control of their corporate communication strategies to phenomena such as the spread of fake news. Hence, although there is growing interest in the academic literature in the impact of fake news, scholars agree that more research is needed to provide a better understanding of the phenomenon; indeed, no study to date has focused on how fake news attacks corporate reputation across the different phases of the fake news life cycle. The aim of this PhD thesis is threefold: (1) to investigate how fake news, during its life cycle, attacks corporate reputation; (2) to identify the key actors involved in the process of stemming fake news and their roles; (3) to identify the most effective response strategies for organizations threatened by fake news. To achieve the aim of this exploratory research, a mixed-method approach was adopted: a qualitative content analysis was conducted on a database of 454 fake news headlines; four longitudinal case studies were analyzed; and a survey of a sample of Italian citizens was conducted to investigate their perceptions and the most effective response strategies for organizations attacked by fake news. The findings identify two types of borrowed credibility on which fake news leverages and two thematic clusters that characterize them; crossing these dimensions yields four ideal types of fake news attack strategies. Moreover, the results highlight the weakness of the role of fact checkers, who are unable to penetrate the filter bubbles in which fake news rapidly spreads: fact checkers and fake news branch out on two parallel channels, without crossing each other, and reach different targets, which represents an ethical challenge for digital platforms such as social media. Finally, the findings of the survey show a widespread and prevailing opinion among Italians that openness and transparency should be the key values of any response strategy; indeed, the clear answer from the survey respondents is that the best response strategy for an organization attacked by fake news is to be forthcoming in providing timely information.

    Modelling causal reasoning

    Although human causal reasoning is widely acknowledged as an object of scientific enquiry, there is little consensus on an appropriate measure of progress. Up-to-date evidence on the standard method of research in the field shows that this method was rejected at the birth of modern science. We describe an instance of the standard scientific method for modelling causal reasoning (causal calculators). The method allows for uniform proofs of three relevant computational properties: correctness of the model with respect to the intended model; full abstraction of the model (as a function) with respect to the equivalence of reasoning scenarios (its input); and formal relations of equivalence and subsumption between models. The method extends and exploits the systematic paradigm [Handbook of Logic in Artificial Intelligence and Logic Programming, volume IV, p. 439-498, Oxford 1995] to fit our interpretation of it. Using the described method, we present results for some major models, with an updated summary spanning seventy-two years of research in the field.