465 research outputs found
Fuzzy Sets and Formal Logics
The paper discusses the relationship between fuzzy sets and formal logics as well as the influences fuzzy set theory had on the development of particular formal logics. Our focus is on the historical side of these developments. © 2015 Elsevier B.V. All rights reserved. Partial support by the Spanish projects EdeTRI (TIN2012-39348-C02-01) and 2014 SGR 118. Peer reviewed.
A Calculus for Orchestration of Web Services
Service-oriented computing, an emerging paradigm for distributed computing based on the use of services, calls for the development of tools and techniques to build safe and trustworthy systems and to analyse their behaviour. Therefore, many researchers have proposed to use process calculi, a cornerstone of current foundational research on the specification and analysis of concurrent, reactive, and distributed systems. In this paper, we follow this approach and introduce COWS (Calculus for Orchestration of Web Services), a process calculus expressly designed for specifying and combining service-oriented applications, while modelling their dynamic behaviour. We show that COWS can model all the phases of the life cycle of service-oriented applications, such as publication, discovery, negotiation, orchestration, deployment, reconfiguration and execution. We illustrate the specification style that COWS supports by means of a large case study from the automotive domain and a number of more specific examples drawn from it.
Constraint tableaux for two-dimensional fuzzy logics
We introduce two-dimensional logics based on Łukasiewicz and Gödel logics to formalize paraconsistent fuzzy reasoning. The logics are interpreted on matrices, where the common underlying structure is the bi-lattice (twisted) product of the interval. The first (resp. second) coordinate encodes the positive (resp. negative) information one has about a statement. We propose constraint tableaux that provide a modular framework to address their completeness and complexity.
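The twisted-product idea can be illustrated with a small sketch. The operation definitions below are a common choice for the Łukasiewicz case and are an assumption made here for illustration, not necessarily the paper's exact matrix operations: conjunction acts as the Łukasiewicz t-norm on the positive coordinate and as its dual t-conorm on the negative one, and negation simply swaps the two coordinates.

```python
# Sketch (assumed operations, not the paper's exact definitions): truth
# values are pairs (p, n) in [0,1]^2, where p encodes positive and n
# negative information about a statement.

def t_norm(a, b):
    """Lukasiewicz t-norm: max(0, a + b - 1)."""
    return max(0.0, a + b - 1.0)

def t_conorm(a, b):
    """Lukasiewicz t-conorm: min(1, a + b)."""
    return min(1.0, a + b)

def conj(x, y):
    """Conjunction on the twisted product: t-norm on positive info,
    t-conorm on negative info."""
    return (t_norm(x[0], y[0]), t_conorm(x[1], y[1]))

def neg(x):
    """Paraconsistent negation: swap positive and negative information."""
    return (x[1], x[0])

# A statement can carry strong positive AND strong negative evidence at
# once, which is what makes the reasoning paraconsistent:
both = (0.8, 0.7)          # strongly asserted and strongly denied
print(conj(both, neg(both)))
```

Note that `conj(both, neg(both))` does not collapse to the classical "false" pair, which is the hallmark of handling contradictory information gracefully.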
Zero-one laws with respect to models of provability logic and two Grzegorczyk logics
It was shown in the late 1960s that each formula of first-order logic without constants and function symbols obeys a zero-one law: as the number of elements of finite models increases, every formula holds either in almost all or in almost no models of that size. Therefore, many properties of models, such as having an even number of elements, cannot be expressed in the language of first-order logic. Halpern and Kapron proved zero-one laws for classes of models corresponding to the modal logics K, T, S4, and S5 and for frames corresponding to S4 and S5. In this paper, we prove zero-one laws for provability logic and its two siblings, Grzegorczyk logic and weak Grzegorczyk logic, with respect to model validity. Moreover, we axiomatize validity in almost all relevant finite models, leading to three different axiom systems.
Software Plagiarism Detection Using N-grams
Plagiarism is an act of copying in which one does not properly credit the original source. The motivations behind plagiarism range from passing academic courses to gaining an economic advantage. Plagiarism exists in various domains where people seek credit for creative work; these include, for example, literature, art and software, all of which involve a notion of authorship.
In this thesis we conduct a systematic literature review on the topic of source code plagiarism detection methods; then, based on the literature, we propose a new approach to detecting plagiarism that combines similarity detection and authorship identification, introduce our tokenization method for source code, and lastly evaluate the model using real-life datasets. The goal of our model is to point out possible plagiarism in a collection of documents, which in this thesis means a collection of source code files written by various authors. The data we use for our statistical methods consist of three datasets: (1) a collection of documents from the University of Helsinki's first programming course, (2) a collection of documents from the University of Helsinki's advanced programming course, and (3) submissions to a source code re-use competition. The statistical methods in this thesis are inspired by the theory of search engines; they draw on data mining when detecting similarity between documents and on machine learning when classifying a document by its most likely author during authorship identification.
The results show that our similarity detection model can successfully retrieve documents for further plagiarism inspection, but false positives appear quickly even with a high threshold controlling the minimum allowed level of similarity between documents. We were unable to use the results of authorship identification in our study, as the results of our machine learning model were not accurate enough to be used sensibly. This was possibly caused by the high similarity between documents, which stems from the restricted tasks and a course setting that teaches a specific programming style over the span of the course.
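The similarity detection pipeline described above (tokenize, form n-grams, compare) can be sketched as follows. The regex tokenizer and the 3-gram Jaccard comparison here are illustrative stand-ins, not the thesis's actual tokenization method or scoring model:

```python
# Minimal sketch of n-gram-based source code similarity detection.
# The tokenizer is a crude regex stand-in (hypothetical, for illustration).
import re

def tokenize(source):
    """Split source code into identifier, number and operator tokens."""
    return re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", source)

def ngrams(tokens, n=3):
    """Set of token n-grams, for set-based comparison."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def jaccard(a, b, n=3):
    """Jaccard similarity of two files' n-gram sets, in [0, 1]."""
    ga, gb = ngrams(tokenize(a), n), ngrams(tokenize(b), n)
    if not ga and not gb:
        return 1.0
    return len(ga & gb) / len(ga | gb)

original = "total = 0\nfor x in data:\n    total += x"
suspect = "s = 0\nfor v in items:\n    s += v"

# Identifiers differ, so raw token 3-grams overlap only slightly; with
# identifier normalization the score would rise toward 1.0.
print(jaccard(original, suspect))
```

In a full pipeline, every pair of documents scoring above a chosen threshold would be flagged for manual plagiarism inspection; as the thesis notes, picking that threshold trades recall against false positives.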
- …