38 research outputs found
Presupposition And Entailment In "The Family Nightmare" Short Story
The focus of this research is presupposition and entailment in the short story The Family Nightmare. The research aimed to reveal how presupposition and entailment are used in the story. A qualitative method was applied, involving document and material analysis to collect the data. The results show six types of presupposition and two types of entailment. Presupposition and entailment serve to emphasize, draw the readers' attention and sympathy, and act as a strategy to keep readers focused on the story.
Modeling of query languages and applications in code refactoring and code optimization
The query containment problem is a very important computer science problem,
originally defined for relational queries. With the growing popularity of the SPARQL query
language, it became relevant and important in this new context, too. This thesis introduces
a new approach for solving this problem, based on a reduction to satisfiability in first-order
logic. The approach covers containment under the RDF Schema entailment regime, and it can
deal with the subsumption relation, as a weaker form of containment. The thesis proves
soundness and completeness of the approach for a wide range of language constructs. It also
describes an implementation of the approach as the open-source solver SPECS. The experimental
evaluation on relevant benchmarks shows that SPECS is efficient and that, compared to
state-of-the-art solvers, it gives more precise results in a shorter amount of time, while supporting
a larger fragment of SPARQL constructs. Query language modeling can also be useful
during refactoring of database-driven applications, where simultaneous changes
to both a query and the host-language code are very common. These changes can
preserve the overall equivalence, without preserving equivalence of these two parts considered
separately. Because of the ability to guarantee the absence of differences in behavior between
two versions of the code, tools that automatically verify code equivalence have great benefits
for reliable software development. With this motivation, a custom first-order logic modeling
of SQL queries is developed and described in the thesis. It enables an automated approach
for reasoning about equivalence of C/C++ programs with embedded SQL. The approach is
implemented within SQLAV, a publicly available, open-source framework.
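The containment notion at the core of the abstract above can be stated set-theoretically: Q1 is contained in Q2 iff Q1(D) ⊆ Q2(D) for every dataset D. A minimal sketch in Python (all names hypothetical, not from the SPECS implementation) illustrates why testing on sample data can only refute containment with a counterexample, which is why the thesis reduces the full problem to first-order satisfiability instead:

```python
# Toy illustration of query containment: Q1 is contained in Q2 iff
# Q1(D) is a subset of Q2(D) for EVERY dataset D.  Evaluating on a
# sample dataset can only refute containment (find a counterexample),
# never prove it; proving requires a logical reduction as in the thesis.

def q1(graph):
    """Subjects of 'parent' triples: SELECT ?x WHERE { ?x parent ?y }"""
    return {s for (s, p, o) in graph if p == "parent"}

def q2(graph):
    """Subjects of any triple: SELECT ?x WHERE { ?x ?p ?y }"""
    return {s for (s, p, o) in graph}

def refutes_containment(sub, sup, dataset):
    """Return a witness answer in sub(D) but not sup(D), or None."""
    diff = sub(dataset) - sup(dataset)
    return next(iter(diff), None)

d = {("ann", "parent", "bob"), ("bob", "knows", "carl")}
print(refutes_containment(q1, q2, d))   # None: no counterexample here
print(refutes_containment(q2, q1, d))   # "bob" witnesses non-containment
```

On this sample, q1 ⊑ q2 survives the test (every parent-subject has an outgoing edge), while the converse direction is refuted by "bob".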
JURI SAYS: An Automatic Judgement Prediction System for the European Court of Human Rights
In this paper we present the web platform JURI SAYS, which automatically predicts decisions of the European Court of Human Rights based on communicated cases, which are published by the court early in the proceedings and are often available many years before the final decision is made. Our system therefore predicts future judgements of the court. The platform is available at jurisays.com and shows the predictions compared to the actual decisions of the court. It is automatically updated every month by including the predictions for new cases. Additionally, the system highlights the sentences and paragraphs that are most important for the prediction (i.e., violation vs. no violation of human rights).
Abstract Representation of Music: A Type-Based Knowledge Representation Framework
The wholesale efficacy of computer-based music research is contingent on the sharing and reuse of information and analysis methods amongst researchers across the constituent disciplines. However, computer systems for the analysis and manipulation of musical data are generally not interoperable. Knowledge representation has been extensively used in the domain of music to harness the benefits of formal conceptual modelling combined with logic based automated inference. However, the available knowledge representation languages lack sufficient logical expressivity to support sophisticated musicological concepts. In this thesis we present a type-based framework for abstract representation of musical knowledge. The core of the framework is a multiple-hierarchical information model called a constituent structure, which accommodates diverse kinds of musical information. The framework includes a specification logic for expressing formal descriptions of the components of the representation. We give a formal specification for the framework in the Calculus of Inductive Constructions, an expressive logical language which lends itself to the abstract specification of data types and information structures. We give an implementation of our framework using Semantic Web ontologies and JavaScript. The ontologies capture the core structural aspects of the representation, while the JavaScript tools implement the functionality of the abstract specification. We describe how our framework supports three music analysis tasks: pattern search and discovery, paradigmatic analysis and hierarchical set-class analysis, detailing how constituent structures are used to represent both the input and output of these analyses including sophisticated structural annotations. We present a simple demonstrator application, built with the JavaScript tools, which performs simple analysis and visualisation of linked data documents structured by the ontologies. 
We conclude with a summary of the contributions of the thesis and a discussion of the type-based approach to knowledge representation, as well as a number of avenues for future work in this area.
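The abstract's central data structure is the "constituent structure", a multiple-hierarchical grouping of musical events. A minimal hypothetical sketch in Python dataclasses (the thesis's actual specification is in the Calculus of Inductive Constructions, with Semantic Web ontologies and JavaScript tools; every name below is illustrative only) shows the shape of the idea, including how one event can sit in several overlapping hierarchies:

```python
# Hypothetical sketch of a constituent structure: musical events grouped
# into constituents, where the same event may belong to several
# independent hierarchies (e.g. a metrical grouping and a harmonic
# grouping at once).  Not the thesis's actual formal specification.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Note:
    onset: float     # time in beats
    pitch: int       # MIDI note number

@dataclass
class Constituent:
    label: str
    particles: frozenset        # Note objects grouped directly
    children: list = field(default_factory=list)

    def span(self):
        """All notes covered by this constituent, recursively."""
        notes = set(self.particles)
        for child in self.children:
            notes |= child.span()
        return notes

a, b, c = Note(0.0, 60), Note(1.0, 64), Note(2.0, 67)
motif = Constituent("motif-1", frozenset({a, b}))
bar = Constituent("bar-1", frozenset({c}), children=[motif])
# The same notes also participate in a second, independent hierarchy:
harmony = Constituent("C-major-triad", frozenset({a, b, c}))
print(len(bar.span()))   # 3
```

The key point is that `bar` and `harmony` are two unrelated trees over the same underlying notes, which is the "multiple-hierarchical" property the abstract describes.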
Abstraction in ontology-based data management
In many aspects of our society there is growing awareness of, and consensus on, the need for data-driven approaches that are resilient, transparent, and fully accountable. But in order to fulfil the promises and benefits of a data-driven society, it is necessary that the data services exposed by organisations' information systems are well documented and their semantics clearly specified. Effectively documenting data services is indeed a crucial issue for organisations, not only for governing their own data, but also for interoperation purposes.
In this thesis, we propose a new approach to automatically associate formal semantic descriptions to data services, thus bringing them into compliance with the FAIR guiding principles, i.e., make data services automatically Findable, Accessible, Interoperable, and Reusable (FAIR). We base our proposal on the Ontology-based Data Management (OBDM) paradigm, where a domain ontology is used to provide a semantic layer mapped to the data sources of an organisation, thus abstracting from the technical details of the data layer implementation.
The basic idea is to characterise or explain the semantics of a given data service, expressed as a query over the source schema, in terms of a query over the ontology. Thus, the query over the ontology represents an abstraction of the given data service in terms of the domain ontology through the mapping, and, together with the elements in the vocabulary of the ontology, such an abstraction forms a basis for annotating the given data service with suitable metadata expressing its semantics.
We illustrate a formal framework for the task of automatically producing a semantic characterisation of a given data service expressed as a query over the source schema. The framework is based on three semantically well-founded notions, namely perfect, sound, and complete source-to-ontology rewriting, and on two associated basic computational problems, namely verification and computation. The former verifies whether a given query over the ontology is a perfect (respectively, sound, complete) source-to-ontology rewriting of a given data service expressed as a query over the source schema, whereas the latter computes one such rewriting, provided it exists. We provide an in-depth complexity analysis of these two computational problems in a very general scenario, using languages amongst the most popular considered in the literature on managing data through an ontology. Furthermore, since we also study cases where the target query language for expressing source-to-ontology rewritings allows inequality atoms, we investigate the problem of answering queries with inequalities over lightweight ontologies, a problem that has rarely been addressed. In another direction, we study and advocate the use of a non-monotonic target query language for expressing source-to-ontology rewritings. Last but not least, we outline detailed related work, illustrating how the results achieved in this thesis contribute new results to the Semantic Web context, relational database theory, and view-based query processing.
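The verification problem above asks whether a candidate ontology query is, for example, a sound rewriting of a data service: on every source instance, its answers (via the mapping) must be a subset of the service's answers. A toy Python sketch (all schemas, mappings, and queries hypothetical, and a single sample instance can only refute soundness, never prove it) makes the direction of the containment concrete:

```python
# Toy check of a SOUND source-to-ontology rewriting on ONE sample
# source instance: every answer of the ontology query over the mapped
# data must also be an answer of the data service.  A sample instance
# can only refute soundness; the thesis studies the general problem.
# All names here are hypothetical, not from the thesis's framework.

source_db = {"employee": {("ann", "it"), ("bob", "hr")}}

def mapping(db):
    """Map source tuples to ontology assertions (a tiny ABox)."""
    return {("Worker", name) for (name, dept) in db["employee"]}

def data_service(db):
    """Source query: names of employees in the 'it' department."""
    return {name for (name, dept) in db["employee"] if dept == "it"}

def ontology_query(abox):
    """Candidate rewriting: all instances of Worker."""
    return {x for (cls, x) in abox if cls == "Worker"}

ont_answers = ontology_query(mapping(source_db))
sound_here = ont_answers <= data_service(source_db)
print(sound_here)   # False: "bob" is a Worker but not an 'it' answer
```

Here the candidate is refuted: "all Workers" over-approximates the service "IT employees", so it could at best be a complete rewriting, not a sound one.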
Foundations of Fuzzy Logic and Semantic Web Languages
This book is the first to combine coverage of fuzzy logic and Semantic Web languages. It provides in-depth insight into fuzzy Semantic Web languages for readers who are not experts in fuzzy set theory and fuzzy logic. It also helps researchers of non-Semantic Web languages get a better understanding of the theoretical fundamentals of Semantic Web languages. The first part of the book covers all the theoretical and logical aspects of classical (two-valued) Semantic Web languages. The second part explains how to generalize these languages to cope with fuzzy set theory and fuzzy logic.
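The generalization step the second part of the book describes rests on replacing two-valued truth with membership degrees in [0, 1], combined by a t-norm. A minimal sketch of the three standard t-norms (Goedel, product, Lukasiewicz), which give different fuzzy semantics to the same axioms:

```python
# Minimal sketch of the fuzzy-logic machinery underlying fuzzy Semantic
# Web languages: truth degrees in [0, 1] combined by a t-norm.  Each
# t-norm induces a different fuzzy semantics for the same axioms.

def goedel(a, b):       return min(a, b)
def product(a, b):      return a * b
def lukasiewicz(a, b):  return max(0.0, a + b - 1.0)

# Degree of "tall(x) AND heavy(x)" when tall holds to 0.8, heavy to 0.6:
for tnorm in (goedel, product, lukasiewicz):
    print(tnorm.__name__, round(tnorm(0.8, 0.6), 2))
# goedel 0.6, product 0.48, lukasiewicz 0.4
```

All three agree with classical conjunction on the crisp values 0 and 1; they differ only on intermediate degrees, which is exactly where the fuzzy generalization has content.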
Knowledge Components and Methods for Policy Propagation in Data Flows
Data-oriented systems and applications are at the centre of current developments of the World Wide Web (WWW). On the Web of Data (WoD), information sources can be accessed and processed for many purposes. Users need to be aware of any licences or terms of use, which are associated with the data sources they want to use. Conversely, publishers need support in assigning the appropriate policies alongside the data they distribute.
In this work, we tackle the problem of policy propagation in data flows, an expression that refers to the way data is consumed, manipulated and produced within processes. We pose the question of what kinds of components are required, and how they can be acquired, managed, and deployed, to support users in deciding which policies propagate to the output of a data-intensive system from those associated with its input. We observe three scenarios: applications of the Semantic Web, workflow reuse in Open Science, and the exploitation of urban data in City Data Hubs. Starting from the analysis of Semantic Web applications, we propose a data-centric approach to semantically describing processes as data flows: the Datanode ontology, which comprises a hierarchy of the possible relations between data objects. By means of Policy Propagation Rules, it is possible to link data flow steps and policies derivable from semantic descriptions of data licences. We show how these components can be designed, how they can be effectively managed, and how to reason efficiently with them. In a second phase, the developed components are verified using a Smart City Data Hub as a case study, where we developed an end-to-end solution for policy propagation. Finally, we evaluate our approach and report on a user study aimed at assessing both the quality and the value of the proposed solution.
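The mechanics of Policy Propagation Rules can be sketched very compactly: each data flow step carries a relation between its input and output data objects, and a rule table says which relations carry which policies forward. The relation and policy names below are illustrative, not the actual Datanode vocabulary or any real licence terms:

```python
# Hypothetical sketch of policy propagation over a data flow: each step
# is annotated with a Datanode-style relation between input and output,
# and propagation rules decide which input policies survive the step.
# Relation and policy names are illustrative, not the real vocabulary.

# rule table: (dataflow relation, policy) -> does the policy propagate?
RULES = {
    ("copy", "attribution-required"): True,
    ("copy", "personal-data-restrictions"): True,
    ("anonymised-version-of", "attribution-required"): True,
    ("anonymised-version-of", "personal-data-restrictions"): False,
}

def propagate(input_policies, relation):
    """Policies that propagate to the output of one data flow step."""
    return {p for p in input_policies if RULES.get((relation, p), False)}

src = {"attribution-required", "personal-data-restrictions"}
out = propagate(src, "anonymised-version-of")
print(sorted(out))   # ['attribution-required']
```

Chaining `propagate` along the steps of a whole pipeline yields the set of policies attached to the final output, which is the user-facing question the thesis addresses.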
Logic, Languages, and Rules for Web Data Extraction and Reasoning over Data
This paper gives a short overview of specific logical approaches to data extraction, data management, and reasoning about data. In particular, we survey theoretical results and formalisms that have been obtained and used in the context of the Lixto Project at TU Wien, the DIADEM project at the University of Oxford, and the VADA project, which is currently being carried out jointly by the universities of Edinburgh, Manchester, and Oxford. We start with a formal approach to web data extraction rooted in monadic second-order logic and monadic Datalog, which gave rise to the Lixto data extraction system. We then present some complexity results for monadic Datalog over trees and for XPath query evaluation. We further argue that for value creation and for ontological reasoning over data, we need existential quantifiers (or Skolem terms) in rule heads, and introduce the Datalog± family. We give an overview of important members of this family and discuss related complexity issues.
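The Datalog reasoning style the survey builds on is bottom-up evaluation: apply the rules to the known facts until nothing new is derived. A minimal, self-contained sketch for the classic transitive-closure program (naive fixpoint iteration, not the optimized semi-naive strategy real engines use):

```python
# Minimal sketch of bottom-up Datalog evaluation: apply rules to known
# facts until a fixpoint.  Example program (transitive closure):
#     path(X, Y) :- edge(X, Y).
#     path(X, Z) :- path(X, Y), edge(Y, Z).
# This is naive iteration; production engines use semi-naive evaluation.

def transitive_closure(edges):
    path = set(edges)                      # first rule seeds the relation
    changed = True
    while changed:                         # iterate to fixpoint
        changed = False
        for (x, y) in list(path):          # second rule: join path, edge
            for (y2, z) in edges:
                if y == y2 and (x, z) not in path:
                    path.add((x, z))
                    changed = True
    return path

edges = {(1, 2), (2, 3), (3, 4)}
print(sorted(transitive_closure(edges)))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

The Datalog± extension the paper introduces adds existential quantifiers in rule heads, which plain fixpoint iteration no longer handles directly; that is precisely where the complexity questions the survey discusses arise.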