
    The Ontology for Agents, Systems and Integration of Services: OASIS version 2

    Semantic representation is a key enabler for several application domains, and the multi-agent systems realm is no exception. Among the methods for semantically representing agents, one takes a behaviouristic view, describing how agents operate and engage with their peers. This approach aims at defining the operational capabilities of agents through the mental states related to the achievement of tasks. The OASIS ontology -- an Ontology for Agents, Systems, and Integration of Services, presented in 2019 -- pursues the behaviouristic approach to deliver a semantic representation system and a communication protocol for agents and their commitments. This paper reports on the main modeling choices concerning the representation of agents in OASIS 2, the latest major upgrade of OASIS, and on the achievements reached by the ontology since it was first introduced, in particular in the context of ontologies for blockchains.
    Comment: Already published in Intelligenza Artificiale, Vol. 17, no. 1, pp. 51-62, 2023. DOI 10.3233/IA-23000
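    To make the behaviouristic idea concrete, here is a minimal sketch in Python using rdflib; the class and property names (Agent, Behavior, hasBehavior, achievesTask, motivatedBy) are illustrative placeholders rather than the actual OASIS 2 vocabulary:

        # Hypothetical sketch: describing an agent by the behaviours it can
        # perform, the tasks those behaviours achieve, and the goals (mental
        # states) that motivate them. Names are placeholders, not OASIS 2 terms.
        from rdflib import Graph, Namespace, RDF

        EX = Namespace("http://example.org/agents#")
        g = Graph()
        g.bind("ex", EX)

        # The agent and the behaviour it exposes to its peers.
        g.add((EX.deliveryBot, RDF.type, EX.Agent))
        g.add((EX.deliveryBot, EX.hasBehavior, EX.deliverParcel))

        # The behaviour is tied to the task it achieves and the motivating goal.
        g.add((EX.deliverParcel, RDF.type, EX.Behavior))
        g.add((EX.deliverParcel, EX.achievesTask, EX.parcelDelivered))
        g.add((EX.deliverParcel, EX.motivatedBy, EX.customerSatisfied))

        print(g.serialize(format="turtle"))

    Querying such a graph for what an agent can accomplish then amounts to following hasBehavior/achievesTask paths.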

    An Abstract Formal Basis for Digital Crowds

    Crowdsourcing, together with its related approaches, has become very popular in recent years. All crowdsourcing processes involve the participation of a digital crowd: a large number of people who access a single Internet platform or shared service. In this paper we explore the possibility of applying formal methods, typically used for the verification of software and hardware systems, to analysing the behaviour of a digital crowd. More precisely, we provide a formal description language for specifying digital crowds. We represent digital crowds in which the agents do not communicate directly with each other. We further show how this specification can provide the basis for sophisticated formal methods, in particular formal verification.
    Comment: 32 pages, 4 figures
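    As a toy illustration of the kind of analysis such a specification enables (not the paper's own formalism), one can model a crowd whose agents interact only with a shared platform and exhaustively check a safety property over the reachable states:

        # Toy sketch: agents never communicate with each other; they only
        # submit to a shared platform. We enumerate the reachable states and
        # check a simple safety property, in the spirit of model checking.
        NUM_AGENTS = 3
        LIMIT = 2  # each agent may submit at most LIMIT times

        def step(state, agent):
            """One agent submits to the platform if it has quota left."""
            submissions, total = state
            if submissions[agent] < LIMIT:
                new_subs = list(submissions)
                new_subs[agent] += 1
                return (tuple(new_subs), total + 1)
            return state

        def reachable_states():
            init = ((0,) * NUM_AGENTS, 0)
            seen, frontier = {init}, [init]
            while frontier:
                state = frontier.pop()
                for agent in range(NUM_AGENTS):
                    nxt = step(state, agent)
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
            return seen

        # Safety property: the platform never records more than
        # NUM_AGENTS * LIMIT submissions in total.
        states = reachable_states()
        assert all(total <= NUM_AGENTS * LIMIT for _, total in states)
        print(len(states), "reachable states; safety property holds")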

    A Connectionist Theory of Phenomenal Experience

    When cognitive scientists apply computational theory to the problem of phenomenal consciousness, as many of them have been doing recently, there are two fundamentally distinct approaches available. Either consciousness is to be explained in terms of the nature of the representational vehicles the brain deploys, or it is to be explained in terms of the computational processes defined over these vehicles. We call versions of these two approaches vehicle and process theories of consciousness, respectively. However, while there may be space for vehicle theories of consciousness in cognitive science, they are relatively rare. This is because of the influence exerted, on the one hand, by a large body of research which purports to show that the explicit representation of information in the brain and conscious experience are dissociable, and on the other, by the classical computational theory of mind – the theory that takes human cognition to be a species of symbol manipulation. But two recent developments in cognitive science combine to suggest that a reappraisal of this situation is in order. First, a number of theorists have recently been highly critical of the experimental methodologies employed in the dissociation studies – so critical, in fact, that it’s no longer reasonable to assume that the dissociability of conscious experience and explicit representation has been adequately demonstrated. Second, classicism, as a theory of human cognition, is no longer as dominant in cognitive science as it once was. It now has a lively competitor in the form of connectionism; and connectionism, unlike classicism, does have the computational resources to support a robust vehicle theory of consciousness. In this paper we develop and defend this connectionist vehicle theory of consciousness. It takes the form of the following simple empirical hypothesis: phenomenal experience consists in the explicit representation of information in neurally realized PDP networks. This hypothesis leads us to re-assess some common wisdom about consciousness, but, we will argue, in fruitful and ultimately plausible ways.

    Modelling Users, Intentions, and Structure in Spoken Dialog

    We outline how utterances in dialogs can be interpreted using a partial first-order logic. We exploit the capability of this logic to talk about the truth status of formulae in order to define a notion of coherence between utterances, and we explain how this coherence relation can serve in the construction of AND/OR trees that represent the segmentation of the dialog. In a BDI model we formalize basic assumptions about dialog and the cooperative behaviour of participants. These assumptions provide a basis for inferring speech acts from coherence relations between utterances and the attitudes of dialog participants. Speech acts prove useful for determining dialog segments defined in terms of the completion of participants' expectations. Finally, we sketch how explicit segmentation signalled by cue phrases and performatives is covered by our dialog model.
    Comment: 17 pages
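    A minimal sketch of the AND/OR-tree idea (illustrative only, not the paper's formal definitions): a segment's expectation counts as completed when all of its AND sub-segments, or at least one of its OR alternatives, have been completed:

        # Illustrative AND/OR tree over dialog segments: an AND segment is
        # complete once all children are, an OR segment once any child is.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Segment:
            label: str
            kind: str = "AND"            # "AND" or "OR"
            completed: bool = False      # set directly for leaf utterances
            children: List["Segment"] = field(default_factory=list)

            def expectation_met(self) -> bool:
                if not self.children:
                    return self.completed
                results = [c.expectation_met() for c in self.children]
                return all(results) if self.kind == "AND" else any(results)

        # A request opens an expectation that is closed by either a direct
        # answer or a clarification sub-dialog.
        dialog = Segment("book-flight", "AND", children=[
            Segment("request", completed=True),
            Segment("destination", "OR", children=[
                Segment("answer", completed=True),
                Segment("clarification", completed=False),
            ]),
        ])
        print(dialog.expectation_met())  # True: request and one OR branch are done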

    Story Ending Generation with Incremental Encoding and Commonsense Knowledge

    Generating a reasonable ending for a given story context, i.e., story ending generation, is a strong indication of story comprehension. This task requires not only understanding the context clues, which play an important role in planning the plot, but also handling implicit knowledge to produce a reasonable, coherent story. In this paper, we devise a novel model for story ending generation. The model adopts an incremental encoding scheme to represent the context clues that span the story context. In addition, commonsense knowledge is applied through multi-source attention to facilitate story comprehension and thus to help generate coherent and reasonable endings. By building context clues and using implicit knowledge, the model is able to produce reasonable story endings. Automatic and manual evaluation shows that our model can generate more reasonable story endings than state-of-the-art baselines.
    Comment: Accepted in AAAI201
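    A minimal sketch of how incremental encoding with attention over knowledge embeddings might look in PyTorch (the shapes, module choices, and single-layer GRU are assumptions for illustration, not the paper's exact architecture):

        # Sketch: encode context sentences one at a time, seeding each
        # sentence's GRU with the previous sentence's final state, and fold
        # in attention over commonsense-knowledge embeddings at every step.
        import torch
        import torch.nn as nn

        class IncrementalEncoder(nn.Module):
            def __init__(self, vocab_size, hidden=128):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, hidden)
                self.gru = nn.GRU(hidden, hidden, batch_first=True)
                self.knowledge_attn = nn.MultiheadAttention(
                    hidden, num_heads=4, batch_first=True)

            def forward(self, sentences, knowledge):
                # sentences: list of (batch, seq_len) token-id tensors
                # knowledge: (batch, n_facts, hidden) knowledge embeddings
                state = None
                for tokens in sentences:
                    _, state = self.gru(self.embed(tokens), state)
                    # Attend from the running context state over the facts
                    # and fold the result into the state for the next step.
                    ctx, _ = self.knowledge_attn(
                        state.transpose(0, 1), knowledge, knowledge)
                    state = state + ctx.transpose(0, 1)
                return state  # context vector that conditions the ending decoder

        enc = IncrementalEncoder(vocab_size=1000)
        story = [torch.randint(0, 1000, (2, 7)) for _ in range(4)]  # 4 sentences
        facts = torch.randn(2, 5, 128)                              # 5 facts
        print(enc(story, facts).shape)  # torch.Size([1, 2, 128])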

    Representational Kinds

    Many debates in philosophy focus on whether folk or scientific psychological notions pick out cognitive natural kinds. Examples include memory, emotions, and concepts. A potentially interesting type of kind is kinds of mental representations (as opposed, for example, to kinds of psychological faculties). In this chapter we outline a proposal for a theory of representational kinds in cognitive science. We argue that the explanatory role of representational kinds in scientific theories, in conjunction with a mainstream approach to explanation in cognitive science, suggests that representational kinds are multi-level. This is to say that representational kinds’ properties cluster at different levels of explanation and allow for intra- and inter-level projections.