
    Local and Global Contexts for Conversation

    The context in conversation is the dialog history, which is crucial for multi-turn dialogue. Learning from the relevant contexts in the dialog history for grounded conversation is a challenging problem. The local context comprises the nearest neighboring utterances and is most sensitive to the subsequent response, while the global context spans the whole conversation, far beyond the neighboring utterances. Current pretrained transformer models for conversation struggle to capture the correlation and connection between local and global contexts. We introduce a local and global conversation model (LGCM) for general-purpose open-domain conversation. It is a local-global hierarchical transformer model that excels at accurately discerning and assimilating the relevant contexts necessary for generating responses. It employs a local encoder to grasp the local context at the level of individual utterances and a global encoder to understand the broader context at the dialogue level. The seamless fusion of these locally and globally contextualized encodings ensures a comprehensive understanding of the conversation. Experiments on popular datasets show that LGCM outperforms existing conversation models on automatic metrics by significant margins. Comment: 11 pages, 3 figures
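    The two-level idea in the abstract can be illustrated with a deliberately toy sketch (function names and the bag-of-words encoder are illustrative assumptions, not the authors' transformer architecture): a local encoder summarizes each utterance, a global encoder summarizes the dialogue, and the two encodings are fused before generating a response.

    ```python
    # Toy sketch of local/global context fusion; a real LGCM uses
    # transformer encoders, not bag-of-words averages.

    def local_encode(utterance):
        """Local encoder stand-in: bag-of-words vector over a tiny vocabulary."""
        vocab = ["hi", "how", "are", "you", "fine", "thanks", "bye"]
        return [utterance.lower().split().count(w) for w in vocab]

    def global_encode(local_states):
        """Global encoder stand-in: average the per-utterance encodings."""
        n = len(local_states)
        return [sum(col) / n for col in zip(*local_states)]

    def fuse(local_state, global_state, alpha=0.5):
        """Blend the most recent local context with the dialogue-level context."""
        return [alpha * l + (1 - alpha) * g
                for l, g in zip(local_state, global_state)]

    dialogue = ["Hi how are you", "Fine thanks", "Bye"]
    locals_ = [local_encode(u) for u in dialogue]
    fused = fuse(locals_[-1], global_encode(locals_))
    ```

    The point of the sketch is only the data flow: per-utterance states feed a dialogue-level state, and the response is conditioned on a fusion of both rather than on either alone.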

    An Approach to Generating Arguments over DL-Lite Ontologies

    Argumentation frameworks for ontology reasoning and management have attracted extensive interest in the field of artificial intelligence in recent years. As one of the most popular argumentation frameworks, Besnard and Hunter's framework is built on arguments of the form ⟨Φ, φ⟩, where Φ is consistent and minimal for entailing φ. However, the problem of generating arguments over ontologies is still open. This paper presents an approach to generating arguments over DL-Lite ontologies by searching for support paths in focal graphs. Moreover, theoretical results and examples are provided to ensure the correctness of this approach. Finally, we show that this approach has the same complexity as propositional revision.
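    Besnard and Hunter's definition of an argument ⟨Φ, φ⟩ can be made concrete with a brute-force propositional sketch (the paper's support-path search over focal graphs is far more targeted; this only illustrates the definition). Formulas are modeled as predicates over truth assignments, and we enumerate consistent, subset-minimal supports.

    ```python
    from itertools import combinations, product

    # Brute-force illustration of Besnard-Hunter arguments <Phi, phi>:
    # Phi entails phi, Phi is consistent, and no proper subset of Phi
    # already entails phi.

    def models(atoms):
        for bits in product([False, True], repeat=len(atoms)):
            yield dict(zip(atoms, bits))

    def entails(premises, goal, atoms):
        return all(goal(m) for m in models(atoms) if all(p(m) for p in premises))

    def consistent(premises, atoms):
        return any(all(p(m) for m in [m])  # every premise true in some model
                   for m in models(atoms) if all(p(m) for p in premises))

    def arguments_for(kb, goal, atoms):
        """All <Phi, goal> with Phi a consistent, minimal subset of kb entailing goal."""
        found = []
        for r in range(1, len(kb) + 1):
            for phi in combinations(kb, r):
                if consistent(phi, atoms) and entails(phi, goal, atoms):
                    # skip supersets of an already-found (smaller) support
                    if not any(set(prev) <= set(phi) for prev in found):
                        found.append(phi)
        return found

    # KB: { a, a -> b, c }; goal: b. The only argument for b is <{a, a -> b}, b>.
    a = lambda m: m["a"]
    a_implies_b = lambda m: (not m["a"]) or m["b"]
    c = lambda m: m["c"]
    args = arguments_for([a, a_implies_b, c], lambda m: m["b"], ["a", "b", "c"])
    ```

    The irrelevant premise `c` is excluded by minimality, which is exactly the property that makes naive argument generation expensive and motivates a guided search.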

    A Multi-Agent System for E-Business Processes Monitoring in a Web-Based Environment

    In this paper, we present MAGS, a multi-agent system for monitoring e-business processes in a web-based environment. We classify the types of agents in MAGS by their monitoring capabilities. An algorithm is given to explain the mechanism for supervising and controlling the execution of business processes. An abstract model of alerts, which can give warnings of infringements of business policies, is proposed. Access control can also be realized by MAGS, which manifests in delivering different views of the business process to the different roles participating in it. Having been successfully adopted in a customer service management system, MAGS has proven flexible and practical.
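    Two of the abstract's ideas, policy-violation alerts and role-based views of a process, can be sketched as follows. The class and field names are invented for illustration and do not reflect the paper's actual MAGS design.

    ```python
    # Hypothetical sketch: a monitoring agent raises alerts when an event
    # violates its business policy; a role-based view filters which events
    # each participating role may see.

    class MonitoringAgent:
        def __init__(self, name, policy):
            self.name = name
            self.policy = policy      # predicate over an event dict
            self.alerts = []

        def observe(self, event):
            if not self.policy(event):
                self.alerts.append(f"{self.name}: policy violated by {event['task']}")

    def view_for(role, events):
        """Role-based access control: each role sees only its permitted events."""
        return [e for e in events if role in e["visible_to"]]

    events = [
        {"task": "approve_order", "amount": 500,  "visible_to": {"manager", "clerk"}},
        {"task": "refund",        "amount": 9000, "visible_to": {"manager"}},
    ]
    agent = MonitoringAgent("limit-check", lambda e: e["amount"] <= 5000)
    for e in events:
        agent.observe(e)
    ```

    Here the over-limit refund triggers one alert, and the clerk's view of the process omits the manager-only event, mirroring the "different views for different roles" behavior described above.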

    A Coherent and Paraconsistent Variant of the Default Logic

    Further information, including follow-up notes for some of the selected papers, can be found at: www.ucl.ac.uk/commonsense0

    Computing Inconsistency Measurements under Multi-Valued Semantics by Partial Max-SAT Solvers

    Measuring the inconsistency degree of a knowledge base can help us deal with inconsistencies. Several inconsistency measures have been proposed under different multi-valued semantics, including 4-valued semantics, 3-valued semantics, LPm, and Quasi Classical semantics. In this paper, we first carefully analyze the relationship between these inconsistency measures, showing that the inconsistency degrees under 4-valued semantics, 3-valued semantics, and LPm coincide, but differ from the one based on Quasi Classical semantics. We then consider the computation of these measures and show that computing inconsistency measures under multi-valued semantics is usually intractable. To tackle this problem, we propose two novel algorithms that respectively encode the problems of computing inconsistency degrees under 4-valued semantics (equivalently, 3-valued semantics or LPm) and under Quasi Classical semantics into partial Max-SAT problems. We implement these algorithms and run experiments on benchmark data sets. The preliminary but encouraging experimental results show that our approach efficiently handles large knowledge bases.
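    A minimal sketch can make the 4-valued inconsistency degree concrete, assuming a knowledge base of literals for simplicity (the paper's partial Max-SAT encoding is precisely what avoids this brute-force enumeration). Each atom takes one of four values: t, f, B ("both"), or N ("neither"); a positive literal holds under t or B, a negative literal under f or B; the degree is the minimum proportion of atoms forced to B over all 4-valued models.

    ```python
    from itertools import product

    # Brute-force 4-valued inconsistency degree for a KB of literals.
    # A literal is (atom, positive?).

    def holds(literal, v):
        atom, positive = literal
        return v[atom] in (("t", "B") if positive else ("f", "B"))

    def inconsistency_degree(kb, atoms):
        best = None
        for values in product("tfBN", repeat=len(atoms)):
            v = dict(zip(atoms, values))
            if all(holds(lit, v) for lit in kb):
                b_count = sum(1 for x in values if x == "B")
                best = b_count if best is None else min(best, b_count)
        return best / len(atoms)

    # KB = {a, not a, b} over atoms {a, b}: only 'a' must be contradictory,
    # so the degree is 1/2.
    kb = [("a", True), ("a", False), ("b", True)]
    deg = inconsistency_degree(kb, ["a", "b"])
    ```

    The exponential loop over 4^n interpretations is exactly the intractability the abstract mentions; the Max-SAT encoding replaces it with a solver that minimizes the number of B-valued atoms directly.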

    Incorporating Exponential Smoothing into MLP: A Simple but Effective Sequence Model

    Modeling long-range dependencies in sequential data is a crucial step in sequence learning. A recently developed model, the Structured State Space (S4), has demonstrated significant effectiveness in modeling long-range sequences. However, it is unclear whether the success of S4 should be attributed to its intricate parameterization and HiPPO initialization, or simply to the use of State Space Models (SSMs). To further investigate the potential of deep SSMs, we start with exponential smoothing (ETS), a simple SSM, and propose a stacked architecture that directly incorporates it into an element-wise MLP. We augment simple ETS with additional parameters and a complex field to reduce the inductive bias. Despite adding less than 1% to the parameter count of the element-wise MLP, our models achieve results comparable to S4 on the LRA benchmark. Comment: 12 pages, 5 tables, 3 figures
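    The ETS recurrence the paper builds on is the classic exponential smoothing update s_t = α·x_t + (1 − α)·s_{t−1}, applied element-wise. In the paper this is stacked with an MLP and given learnable (complex-valued) parameters; the sketch below uses a fixed scalar α only to show the recurrence itself.

    ```python
    # Plain exponential smoothing (ETS), the simple SSM the paper starts
    # from: each output is a geometrically decaying average of the inputs,
    # which is what gives the layer its long-range memory.

    def ets(xs, alpha=0.5):
        s, out = 0.0, []
        for x in xs:
            s = alpha * x + (1 - alpha) * s
            out.append(s)
        return out

    smoothed = ets([1.0, 1.0, 1.0, 1.0])  # -> [0.5, 0.75, 0.875, 0.9375]
    ```

    Because the state is a single decaying accumulator per channel, the layer adds almost no parameters, consistent with the under-1% overhead claimed in the abstract.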

    Algorithms for paraconsistent reasoning with OWL

    In an open, constantly changing, and collaborative environment like the forthcoming Semantic Web, it is reasonable to expect that knowledge sources will contain noise and inaccuracies. Practical reasoning techniques for ontologies will therefore have to be tolerant of this kind of data, including the ability to handle inconsistencies in a meaningful way. For this purpose, we employ paraconsistent reasoning based on four-valued logic, a classical method for dealing with inconsistencies in knowledge bases. Its transfer to OWL DL, however, necessitates fundamental design choices in dealing with class inclusion, which has resulted in differing proposals for paraconsistent description logics in the literature. In this paper, we build on one of the more general approaches, which, due to its flexibility, appears most promising for further investigation. We present two algorithms suitable for implementation: one based on preprocessing before invoking a classical OWL reasoner, the other based on a modification of the KAON2 transformation algorithms. We also report on our implementation, called ParOWL.
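    The preprocessing route mentioned above rests on a standard idea in four-valued description logics: split every atomic concept A into two classical concepts, one for "provably A" and one for "provably not A", so that an ordinary OWL reasoner can be reused. The sketch below shows one inclusion variant (the "strong" inclusion found in some four-valued DL formulations, where A ⊑ B yields both A⁺ ⊑ B⁺ and B⁻ ⊑ A⁻); the concept names and the choice of variant are illustrative assumptions, not a reproduction of the paper's exact translation rules.

    ```python
    # Hedged sketch of the four-valued-to-classical preprocessing idea for
    # atomic concept inclusions. "_plus" stands for the positive extension
    # of a concept, "_minus" for its negative extension.

    def translate_inclusion(sub, sup):
        """Strong inclusion A <= B becomes: A_plus <= B_plus and B_minus <= A_minus."""
        return [
            (f"{sub}_plus", f"{sup}_plus"),    # membership propagates forward
            (f"{sup}_minus", f"{sub}_minus"),  # non-membership propagates backward
        ]

    classical = translate_inclusion("Penguin", "Bird")
    ```

    After translating every axiom this way, an inconsistency in the original ontology no longer makes the classical ontology unsatisfiable; it merely places some individual in both A_plus and A_minus, which is the tolerant behavior the paper needs.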