238 research outputs found

    Penguins Don't Fly: Reasoning about Generics through Instantiations and Exceptions

    Generics express generalizations about the world (e.g., birds can fly) that are not universally true (e.g., newborn birds and penguins cannot fly). Commonsense knowledge bases, used extensively in NLP, encode some generic knowledge but rarely enumerate such exceptions, and knowing when a generic statement does or does not hold is crucial for developing a comprehensive understanding of generics. We present a novel framework informed by linguistic theory to generate exemplars -- specific cases in which a generic holds true or false. We generate ~19k exemplars for ~650 generics and show that our framework outperforms a strong GPT-3 baseline by 12.8 precision points. Our analysis highlights the importance of linguistic theory-based controllability for generating exemplars, the insufficiency of knowledge bases as a source of exemplars, and the challenges exemplars pose for the task of natural language inference. Comment: EACL 202

    Extracting Emotions from Users' Annotations in Virtual Museums: a Case Study on the Pop-up Virtual Museum of the Design Museum Helsinki

    The paper presents a combined approach to knowledge-based emotion attribution and classification of cultural items employed in the H2020 EU project SPICE (Social cohesion, Participation, and Inclusion through Cultural Engagement) (https://spice-h2020.eu). In particular, we describe an experiment conducted on a selection of items contributed by the virtual museum (Pop-up VR Museum) of Finnish design objects, created by the Design Museum Helsinki in cooperation with Aalto University. The results show an overlap between the emotional labels extracted from the user-generated stories attached to the objects in the collection and the emotional annotations created by the audience during the virtual visit of the collection.

    Theoretical Analysis and Implementation of Abstract Argumentation Frameworks with Domain Assignments

    A representational limitation of current argumentation frameworks is their inability to deal with sets of entities and their properties, for example, to express that an argument is applicable for a specific set of entities that have a certain property and not applicable for all the others. To address this limitation, we recently introduced Abstract Argumentation Frameworks with Domain Assignments (AAFDs), which extend Abstract Argumentation Frameworks (AAFs) by assigning to each argument a domain of application, i.e., a set of entities for which the argument is believed to apply. We provided formal definitions of AAFDs and their semantics, showed with examples how this model can support various features of commonsense and non-monotonic reasoning, and studied its relation to AAFs. In this paper, aiming to provide a deeper insight into this new model, we present more results on the relation between AAFDs and AAFs and the properties of the AAFD semantics, and we introduce an alternative, more expressive way to define the domains of arguments using logical predicates. We also offer an implementation of AAFDs based on Answer Set Programming (ASP) and evaluate it using a range of experiments with synthetic datasets.
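    The core idea of a domain assignment can be sketched in a few lines: each argument carries a set of entities, and an attack only takes effect on the entities the two domains share. The following Python snippet is a minimal illustration under that assumption; the arguments, entities, and the simple "unattacked attackers win" rule are hypothetical and are not the paper's actual semantics or ASP encoding.

```python
# Arguments with explicit domains of application (all names illustrative).
domains = {
    "can_fly": {"tweety", "opus", "chick"},  # "birds can fly"
    "penguin": {"opus"},                     # "penguins cannot fly"
    "newborn": {"chick"},                    # "newborn birds cannot fly"
}
attacks = [("penguin", "can_fly"), ("newborn", "can_fly")]

def effective_domain(arg):
    """Entities for which `arg` survives: each attack removes only the
    entities shared with the (here, unattacked) attacker's domain."""
    dom = set(domains[arg])
    for attacker, target in attacks:
        if target == arg:
            dom -= domains[attacker]
    return dom

print(sorted(effective_domain("can_fly")))  # ['tweety']
```

    The point of the sketch is the set-difference step: the generic argument remains applicable exactly on the entities no exception argument covers.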

    Contelog: A Formal Declarative Framework for Contextual Knowledge Representation and Reasoning

    Context-awareness is at the core of providing timely adaptations in safety-critical secure applications of pervasive computing and Artificial Intelligence (AI) domains. In current AI and application context-aware frameworks, the distinction between knowledge and context is blurred, and the two are not formally integrated. As a result, adaptation behaviors based on contextual reasoning cannot be formally derived and reasoned about. Also, in many smart systems such as automated manufacturing, decision making, and healthcare, it is essential for context-awareness units to synchronize with contextual reasoning modules to derive new knowledge in order to adapt, alert, and predict. A rigorous formalism is therefore essential to (1) represent contextual domain knowledge as well as application rules, and (2) efficiently and effectively reason to draw contextual conclusions. This thesis is a contribution in this direction. The thesis first introduces a formal context representation and a context calculus used to build context models for applications. Then, it introduces query processing and optimization techniques to perform context-based reasoning. The formal framework that achieves these two tasks is called the Contelog Framework, obtained by a conservative extension of the syntax and semantics of Datalog. It models contextual knowledge and infers new knowledge. In its design, contextual knowledge and contextual reasoning are loosely coupled, and hence contextual knowledge is reusable on its own. The significance is that, with the contextual knowledge fixed, the rules in the program and/or the query may be changed. Contelog provides a theory of context in a way that is independent of the application logic rules. The context calculus developed in this thesis allows knowledge inferred in one context to be exported for use in another context. 
Following the idea of Magic sets from Datalog, Magic Contexts together with query rewriting algorithms are introduced to optimize bottom-up query evaluation of Contelog programs. A Book of Examples has been compiled for Contelog, and these examples are implemented as a proof of concept for the generality, expressiveness, and rigor of the proposed Contelog framework. A variety of experiments comparing the performance of Contelog with earlier Datalog implementations reveal a significant improvement and bring out the practical merits of the current stage of Contelog and its potential for future extensions in context representation and reasoning for emerging applications of context-aware computing.
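    The flavor of context-tagged facts with Datalog-style bottom-up evaluation can be sketched as follows. This is a toy illustration, not Contelog's actual syntax or semantics: facts carry a context tag, and a single hypothetical rule is evaluated to a fixpoint separately within each context.

```python
# Facts as (context, predicate, args) triples; all names are made up.
facts = {
    ("hospital", "role", ("alice", "doctor")),
    ("hospital", "on_duty", ("alice",)),
    ("home",     "role", ("alice", "resident")),
}

def derive(facts):
    """Naive bottom-up evaluation of one contextual rule:
       can_access(X) <- role(X, doctor), on_duty(X)
    fired independently within each context, until a fixpoint."""
    facts = set(facts)
    while True:
        new = set()
        for (ctx, pred, args) in facts:
            if pred == "role" and args[1:] == ("doctor",):
                x = args[0]
                if (ctx, "on_duty", (x,)) in facts:
                    new.add((ctx, "can_access", (x,)))
        if new <= facts:
            return facts
        facts |= new

derived = derive(facts)
print(("hospital", "can_access", ("alice",)) in derived)  # True
print(("home", "can_access", ("alice",)) in derived)      # False
```

    The loose coupling described in the abstract corresponds here to the fact that the rule never crosses contexts: changing the rule or the query leaves the tagged knowledge base untouched.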

    Vector Semantics

    This open access book introduces Vector semantics, which links the formal theory of word vectors to the cognitive theory of linguistics. The computational linguists and deep learning researchers who developed word vectors have relied primarily on the ever-increasing availability of large corpora and of computers with highly parallel GPU and TPU compute engines, and their focus is on endowing computers with natural language capabilities for practical applications such as machine translation or question answering. Cognitive linguists investigate natural language from the perspective of human cognition, the relation between language and thought, and questions about conceptual universals, relying primarily on in-depth investigation of language in use. In spite of the fact that these two schools both have ‘linguistics’ in their name, so far there has been very limited communication between them, as their historical origins, data collection methods, and conceptual apparatuses are quite different. Vector semantics bridges the gap by presenting a formal theory, cast in terms of linear polytopes, that generalizes both word vectors and conceptual structures, by treating each dictionary definition as an equation and the entire lexicon as a set of equations mutually constraining all meanings.
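    The "definition as equation" idea can be illustrated with a toy linear system: if puppy is defined as "young dog", impose vec(puppy) = vec(young) + vec(dog) and solve the lexicon's equations jointly by least squares. The vectors, the additive composition, and the anchoring scheme below are illustrative assumptions, not the book's actual model.

```python
import numpy as np

words = ["young", "dog", "puppy"]
idx = {w: i for i, w in enumerate(words)}
dim = 3

# One equation per definition: LHS word = sum of RHS words (toy composition).
definitions = {"puppy": ["young", "dog"]}

# Anchor the primitive words with made-up observed vectors.
anchors = {"young": np.array([1.0, 0.0, 0.0]),
           "dog":   np.array([0.0, 1.0, 1.0])}

rows, rhs = [], []
for w, vec in anchors.items():          # anchoring equations: x_w = vec
    for d in range(dim):
        row = np.zeros(len(words) * dim)
        row[idx[w] * dim + d] = 1.0
        rows.append(row); rhs.append(vec[d])
for w, parts in definitions.items():    # definition equations: x_w - sum(parts) = 0
    for d in range(dim):
        row = np.zeros(len(words) * dim)
        row[idx[w] * dim + d] = 1.0
        for p in parts:
            row[idx[p] * dim + d] -= 1.0
        rows.append(row); rhs.append(0.0)

x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
puppy = x[idx["puppy"] * dim:(idx["puppy"] + 1) * dim]
print(np.round(puppy, 2))  # [1. 1. 1.]
```

    With more definitions the system becomes over- or under-determined, which is where treating the whole lexicon as mutually constraining equations, rather than solving word by word, matters.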

    Architectural Data Flow Analysis for Detecting Violations of Confidentiality Requirements

    Software vendors must consider confidentiality, especially while creating software architectures, because decisions made there are hard to change later. Our approach represents and analyzes data flows in software architectures. Systems specify data flows, and confidentiality requirements specify limitations of those data flows. Software architects use detected violations of these limitations to improve the system. We demonstrate how to integrate our approach into existing development processes.

    Head-Driven Phrase Structure Grammar

    Head-Driven Phrase Structure Grammar (HPSG) is a constraint-based or declarative approach to linguistic knowledge, which analyses all descriptive levels (phonology, morphology, syntax, semantics, pragmatics) with feature-value pairs, structure sharing, and relational constraints. In syntax it assumes that expressions have a single, relatively simple constituent structure. This volume provides a state-of-the-art introduction to the framework. Various chapters discuss basic assumptions and formal foundations, describe the evolution of the framework, and go into the details of the main syntactic phenomena. Further chapters are devoted to non-syntactic levels of description. The book also considers related fields and research areas (gesture, sign languages, computational linguistics) and includes chapters comparing HPSG with other frameworks (Lexical Functional Grammar, Categorial Grammar, Construction Grammar, Dependency Grammar, and Minimalism).

    Architectural Data Flow Analysis for Detecting Violations of Confidentiality Requirements

    This thesis presents an approach for systematically considering confidentiality requirements in software architectures by representing and analyzing data flows. The strengthening of data protection regulations, e.g., through the European General Data Protection Regulation (GDPR), and the public reaction to data scandals such as the one involving Cambridge Analytica have shown that preserving confidentiality is of essential importance to organizations. To preserve confidentiality, it must be considered throughout the entire software development process. Early development phases require particular attention, because a considerable share of later problems can be traced back to mistakes made in these early phases, and the effort to remove flaws from the software architecture grows disproportionately in later development phases. To detect violations of confidentiality requirements, early development phases frequently use data-oriented documentation of the software system, because investigating such a violation often requires following data flows. Data flow diagrams (DFDs) are commonly used to analyze security in general and confidentiality in particular. However, plain DFDs are not yet sufficient to formalize and automate analyses built on top of them. Instead, DFDs, or other architecture description languages (ADLs), must be extended to represent the information necessary for analyzing confidentiality. Such extensions often support confidentiality requirements for exactly one confidentiality mechanism, such as access control. These single-purpose extensions do not support combining mechanisms, which limits their expressiveness. 
If software architects want to switch the confidentiality mechanism in use, they also have to switch the ADL, which entails considerable effort for remodeling the software architecture. Moreover, many analysis approaches offer no integration into existing ADLs and development processes, which significantly hinders their systematic adoption. Existing data-oriented approaches either rely heavily on manual activities and high expertise, or they do not support representing access control, information flow control, and encryption simultaneously in the same architecture specification artifact. Because these are the most widespread confidentiality mechanisms, software architects are likely to be interested in using all of them. The manual activities mentioned above include identifying violations via inspections and tracing data through the system; both require considerable experience in the area of confidentiality. This thesis addresses the aforementioned problems through four contributions: First, we present an extension of the DFD syntax that expresses the information necessary for analyzing access control, information flow control, and encryption via properties and behavior descriptions within the same architecture specification artifact. Second, we present a semantics of this extended DFD syntax that formalizes the behavior of DFDs via label propagation and thereby enables automated data tracing. Third, we present analysis definitions that, based on the DFD syntax and semantics, identify violations of confidentiality requirements. 
The supported confidentiality requirements cover the most important variants of access control, information flow control, and encryption. Fourth, we present guidelines for integrating the framework for data-oriented analyses, which consists of the previous three contributions, into existing ADLs and their associated development processes. We validate the expressiveness, result quality, and modeling effort of our contributions in case studies on seventeen case-study systems. These systems stem largely from related work and cover five kinds of access control requirements, four kinds of information flow requirements, two kinds of encryption, and requirements combining both confidentiality mechanisms. We validated the expressiveness of the DFD syntax, as well as of the ADLs created via the integration guidelines, and were able to represent all but one of the case-study systems. We were also able to represent the confidentiality requirements of sixteen case-study systems with our analysis definitions. Both the DFD-based and the ADL-based analyses produced the expected results, indicating high result quality. We validated the modeling effort in the extended ADLs both for adding a confidentiality mechanism to an existing software architecture and for switching to another mechanism. In both validations we showed that the ADL integrations save modeling effort, because considerable parts of existing software architectures can be reused. Software architects benefit from our contributions through increased flexibility when selecting confidentiality mechanisms and when switching between them. 
The early identification of confidentiality violations furthermore reduces the effort required to fix the underlying problems.
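    The label-propagation idea behind such analyses can be sketched in a few lines of Python. This is a minimal illustration under hypothetical assumptions, not the thesis's actual syntax or analyses: each node of a data flow diagram declares which confidentiality labels it is cleared to receive, labels are propagated along edges to a fixpoint, and a violation is a labeled datum reaching a node not cleared for it.

```python
from collections import deque

# Toy DFD: node -> successors, labels introduced at each node, and the
# labels each node is cleared to handle (all names illustrative).
edges    = {"user_db": ["service"], "service": ["logger", "ui"],
            "logger": [], "ui": []}
produces = {"user_db": {"personal"}}
cleared  = {"user_db": {"personal"}, "service": {"personal"},
            "ui": {"personal"}, "logger": set()}

def violations(edges, produces, cleared):
    labels = {n: set(produces.get(n, set())) for n in edges}
    queue = deque(edges)
    while queue:                      # propagate labels until a fixpoint
        n = queue.popleft()
        for succ in edges[n]:
            if not labels[n] <= labels[succ]:
                labels[succ] |= labels[n]
                queue.append(succ)
    return {(n, lab) for n, labs in labels.items()
            for lab in labs if lab not in cleared.get(n, set())}

print(violations(edges, produces, cleared))  # {('logger', 'personal')}
```

    Here the "personal" label flows from the database through the service into both the UI and the logger; only the logger lacks clearance, so it is flagged. Real analyses additionally model node behavior (e.g., encryption removing or transforming labels), which this sketch omits.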

    A Complementary Account to Emotion Extraction and Classification in Cultural Heritage Based on the Plutchik’s Theory

    The paper presents a combined approach to knowledge-based emotion attribution and classification of cultural items employed in the H2020 project SPICE. In particular, we present a preliminary experiment conducted on a selection of items contributed by the GAM Museum in Turin (Galleria di Arte Moderna), pointing out how different language-based approaches to emotion categorization (used in the systems Sophia and DEGARI, respectively) can be powerfully combined to cope with both coverage and extended affective attributions. Interestingly, both approaches are based on an ontology of Plutchik's theory of emotions.