
    ‘The Gradeability of Causative Events’: A Combined Corpus-based and Dictionary-based Study of Middle English -isen Simplex Copies

    Causativity is one of the most extensively studied operations in linguistics. Whether on the morphological, phonological, semantic, or syntactic level, there seems to be nothing that has not already been explored about this notion (cf. Beavers & Koontz-Garboden, 2020; Givón, 1975; Kemmer & Verhagen, 1994; Levin & Rappaport Hovav, 1995; Martin & Schäfer, 2014). The current study demonstrates that further insights into causativity and the semantics of English causative verbs can be gained by traveling back into the morphological history of Middle English. Causativity and the causativizing properties of verbal affixes have not been comprehensively explored for previous stages of English (Dalton-Puffer, 1996; van Gelderen, 2018). This study investigates Middle English -isen simplex copies, which came into English through the language contact situation with Anglo-Norman (Dalton-Puffer, 1996, p. 201). For this purpose, a combined corpus-based and dictionary-based investigation is carried out using three Middle English corpora. The concept of causativity is broken down into its component parts, and causative -isen simplex copies are investigated with the help of a classification schema that manifests three parameters of causativity. As a result of this investigation, the -isen simplex copies are classified into seven causative subclasses. In addition, an event semantic analysis based on Piñón (2001a, 2001b) and Pizzolante (2017) allows fine-grained differences between different types of causative events to be identified. In this regard, it is demonstrated that causative events not only denote “varying degrees of causativity” but also manifest different degrees of complexity on an event semantic level. This study not only provides further insights into the extensively explored notion of causativity but must at the same time be considered one of the long-awaited stories about the morphological history of English.

    Acta Cybernetica: Volume 18, Number 2.


    IDEF3 formalization report

    The Process Description Capture Method (IDEF3) is one of several Integrated Computer-Aided Manufacturing (ICAM) DEFinition methods developed by the Air Force to support systems engineering activities and, in particular, information systems development. These methods have evolved as a distillation of 'good practice' experience by information system developers and are designed to raise the performance level of the novice practitioner to one comparable with that of an expert. IDEF3 is meant to serve as a knowledge acquisition and requirements definition tool that structures the user's understanding of how a given process, event, or system works around process descriptions. A special-purpose graphical language accompanying the method serves to highlight temporal precedence and causality relationships relative to the process or event being described.

    ISIPTA'07: Proceedings of the Fifth International Symposium on Imprecise Probability: Theories and Applications


    Default reasoning using maximum entropy and variable strength defaults

    The thesis presents a computational model for reasoning with partial information which uses default rules, or information about what normally happens. The idea is to provide a means of filling the gaps in an incomplete world view with the most plausible assumptions while allowing for the retraction of conclusions should they subsequently turn out to be incorrect. The model can be used both to reason from a given knowledge base of default rules and to aid in the construction of such knowledge bases by allowing their designer to compare the consequences of his design with his own default assumptions. The conclusions supported by the proposed model are justified by the use of a probabilistic semantics for default rules in conjunction with a rational means of inference from incomplete knowledge: the principle of maximum entropy (ME). The thesis develops both the theory and the algorithms for the ME approach and argues that it should be considered a general theory of default reasoning. The argument supporting the thesis has two main threads. Firstly, the ME approach is tested on the benchmark examples of nonmonotonic behaviour and is found to handle them appropriately. Moreover, these patterns of commonsense reasoning emerge as consequences of the chosen semantics rather than being design features. It is argued that this makes the ME approach more objective, and its conclusions more justifiable, than those of other default systems. Secondly, the ME approach is compared with two existing systems: the lexicographic approach (LEX) and system Z+. It is shown that the former can be equated with ME under suitable conditions, making it strictly less expressive, while the latter is too crude to perform the subtle resolution of default conflict which the ME approach allows. Finally, a program called DRS is described which implements all systems discussed in the thesis and provides a tool for testing their behaviours. Funded by the Engineering and Physical Sciences Research Council (EPSRC).
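
    As a rough illustration of the probabilistic semantics sketched above, the following minimal example (not the thesis's DRS program, whose details are not given here) computes the maximum-entropy distribution over possible worlds subject to a single default rule "birds fly", read as P(fly | bird) = 1 - eps. The tiny two-proposition world space, the value of eps, and the use of scipy's SLSQP solver are all illustrative assumptions.

```python
# Illustrative sketch only: maximum-entropy (ME) selection of a distribution over
# possible worlds, constrained by one default rule read probabilistically.
# The world space, EPS, and solver choice are assumptions, not the thesis's code.
import numpy as np
from scipy.optimize import minimize

EPS = 0.1  # strength of the default "birds fly": P(fly | bird) = 1 - EPS

# Possible worlds over the propositions {bird, fly}:
# index 0: bird & fly, 1: bird & ~fly, 2: ~bird & fly, 3: ~bird & ~fly
def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return float(np.sum(p * np.log(p)))  # minimising this maximises entropy

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},             # probabilities sum to 1
    {"type": "eq", "fun": lambda p: p[1] - EPS * (p[0] + p[1])},  # P(bird & ~fly) = EPS * P(bird)
]
p0 = np.full(4, 0.25)  # start from the uniform distribution
res = minimize(neg_entropy, p0, bounds=[(0.0, 1.0)] * 4,
               constraints=constraints, method="SLSQP")
p = res.x
print("ME distribution over worlds:", np.round(p, 4))
print("P(fly | bird) =", round(p[0] / (p[0] + p[1]), 4))  # recovers 1 - EPS
```

    Conclusions drawn from such a distribution remain retractable: adding further defaults (say, for an exceptional subclass of birds) and re-running the optimisation changes the resulting conditionals, which is the kind of nonmonotonic behaviour the thesis studies.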

    An Abstract Interpretation Framework for Diagnosis and Verification of Timed Concurrent Constraint Languages

    In this thesis, we propose a semantic framework for tccp based on abstract interpretation, with the main purpose of formally verifying and debugging tccp programs. A key point for the efficacy of the resulting methodologies is the adequacy of the concrete semantics. Thus, in this thesis, much effort has been devoted to the development of a suitable small-step denotational semantics for the tccp language to start with. Our denotational semantics models precisely the small-step behavior of tccp and is suitable to be used within the abstract interpretation framework. Namely, it is defined in a compositional and bottom-up way, it is as condensed as possible (it does not contain redundant elements), and it is goal-independent (its calculus does not depend on the semantic evaluation of a specific initial agent). Another contribution of this thesis is the definition (by abstraction of our small-step denotational semantics) of a big-step denotational semantics that abstracts away from the information about the evolution of the state and keeps only the first and the last (if it exists) state. We show that this big-step semantics is essentially equivalent to the input-output semantics. In order to fulfill our goal of formally validating tccp programs, we build different approximations of our small-step denotational semantics by using standard abstract interpretation techniques. In this way we obtain debugging and verification tools which are correct by construction. More specifically, we propose two abstract semantics that are used to formally debug tccp programs. The first one approximates the information content of tccp behavioral traces, while the second one approximates our small-step semantics with temporal logic formulas. By applying abstract diagnosis with these abstract semantics, we obtain two fully automatic verification methods for tccp.
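
    For readers unfamiliar with abstract interpretation, the generic sketch below illustrates the core idea the thesis relies on, not its tccp semantics or abstract domains: concrete sets of values are approximated by a small abstract domain, and abstract operations soundly over-approximate the concrete ones. The sign domain and every name used here are assumptions chosen purely for illustration.

```python
# Generic abstract-interpretation sketch (illustrative only, unrelated to tccp):
# concrete sets of integers are abstracted to a sign domain, and an abstract
# addition over-approximates the set of concrete sums.
from itertools import product

BOT, NEG, ZERO, POS, TOP = "bot", "-", "0", "+", "top"

def alpha(values):
    """Abstraction: map a concrete set of integers to its best sign description."""
    if not values:
        return BOT  # the empty set is described by bottom
    signs = {NEG if v < 0 else ZERO if v == 0 else POS for v in values}
    return signs.pop() if len(signs) == 1 else TOP  # mixed signs lose precision

def abstract_add(a, b):
    """Sound abstract addition: any case the table cannot decide falls back to top."""
    if BOT in (a, b):
        return BOT  # adding to an empty set of values yields no values
    table = {(POS, POS): POS, (NEG, NEG): NEG, (ZERO, ZERO): ZERO,
             (POS, ZERO): POS, (ZERO, POS): POS, (NEG, ZERO): NEG, (ZERO, NEG): NEG}
    return table.get((a, b), TOP)

# Soundness on a small example: the abstract result must cover the concrete one.
X, Y = {1, 2, 3}, {0, 4}
concrete_sums = {x + y for x, y in product(X, Y)}
print("abstract:", abstract_add(alpha(X), alpha(Y)), "  concrete:", alpha(concrete_sums))
```

    The same recipe, defining an abstraction of the concrete semantics and computing with it, is what underlies the thesis's correct-by-construction debugging and verification tools, only over far richer domains (behavioral traces and temporal logic formulas).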

    Proceedings of the First NASA Formal Methods Symposium

    Topics covered include: Model Checking - My 27-Year Quest to Overcome the State Explosion Problem; Applying Formal Methods to NASA Projects: Transition from Research to Practice; TLA+: Whence, Wherefore, and Whither; Formal Methods Applications in Air Transportation; Theorem Proving in Intel Hardware Design; Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering; Model Checking for Autonomic Systems Specified with ASSL; A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process; Software Model Checking Without Source Code; Generalized Abstract Symbolic Summaries; A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing; Component-Oriented Behavior Extraction for Autonomic System Design; Automated Verification of Design Patterns with LePUS3; A Module Language for Typing by Contracts; From Goal-Oriented Requirements to Event-B Specifications; Introduction of Virtualization Technology to Multi-Process Model Checking; Comparing Techniques for Certified Static Analysis; Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder; jFuzz: A Concolic Whitebox Fuzzer for Java; Machine-Checkable Timed CSP; Stochastic Formal Correctness of Numerical Algorithms; Deductive Verification of Cryptographic Software; Coloured Petri Net Refinement Specification and Correctness Proof with Coq; Modeling Guidelines for Code Generation in the Railway Signaling Context; Tactical Synthesis Of Efficient Global Search Algorithms; Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems; and Formal Methods for Automated Diagnosis of Autosub 6000.

    Should Code Be Law?: Smart Contracts, Blockchain, and Boilerplate

    “Smart contracts...guarantee a very specific set of outcomes. There's never any confusion and there's never any need for litigation.” ~ Jeff Garzik. “If the blockchain promise comes to a reality...most goods, labor and capital will be allocated through decentralized global platforms. Disputes will certainly arise.” ~ Clément Lesaege and Federico Ast. Blockchain-based smart contracts may characterize much of the future of exchange as they expand the scope of potentially efficient bargains through restructuring and reducing transaction costs relative to traditional contracts. This Article analyzes the changes in transaction costs and execution efficiencies as contractual distance (the number of intermediaries required to make an exchange, weighted by the rational level of actual agreement between parties) increases between bespoke contracts, template contracts, contracts of adhesion, and algorithmic contracts housed on platforms like Ethereum and arbitrated on platforms such as Kleros. This framework shows that smart contracts have the potential to lower the contractual distance required to make an exchange by (1) overcoming trust issues that require intermediaries, (2) lowering the incentive to write certain kinds of boilerplate, and (3) increasing the incentive to understand contractual terms. As a result, wide implementation of smart contracts may return contract law closer to the legal ideal of mutual understanding as the basis for exchange. At the same time, these auto-executing agreements risk making the future of contract law a return to the era of sealed instruments, enforcing themselves regardless of impossibility, fraud, and other legal safeguards. As examples of these costs and benefits, the Article focuses on smart contracts in two industries: the environmental public goods sector and the film industry. These industries illustrate the potential for smart contracts as well as steps that can be taken to ensure that, as code becomes law, it will retain the doctrinal wisdom applied to contracts before they became smart.

    Comprehending Each Other: Weak Reciprocity and Processing

    This dissertation looks at the question of how comprehenders get from an underspecified semantic representation to a particular construal. Its focus is on reciprocal sentences. Reciprocal sentences, like other plural sentences, are open to a range of interpretations. Work on the semantics of plural predication commonly assumes that this range of interpretations is due to cumulativity (Krifka 1992): if predicates are inherently cumulative (Kratzer 2001), the logical representations of plural sentences underspecify the interpretation (rather than being ambiguous between various interpretations). The dissertation argues that the processor makes use of a number of general preferences and principles in getting from such underspecified semantic representations to particular construals: principles of economy in mental representation, including a preference for uniformity, and principles of natural grouping. It sees no need for the processor to make use of a principle like the Strongest Meaning Hypothesis (Dalrymple et al. 1998) in comprehending reciprocal sentences. Instead, reciprocal sentences are associated with cumulative semantic representations whose truth conditions are equivalent to Weak Reciprocity (Langendoen 1978), as in Dotlačil (2010). Interpretations weaker than Weak Reciprocity (‘chain interpretations’) arise via a process of pragmatic weakening. Interpretations stronger than Weak Reciprocity may arise in different ways. Statives are seen as having special requirements regarding the naturalness or ‘substantivity’ of pluralities (Kratzer 2001), and this leads to stronger readings. In other cases, strong interpretations are favoured by a preference for uniformity, which is taken to be a type of economy preference. It is assumed that the processor need not commit to a fully spelled out construal, but may build mental models of discourse that themselves underspecify the relations that hold among individuals. While the dissertation's focus is on reciprocal sentences, the same principles and preferences are argued to be involved in comprehending other plural sentences.
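
    For reference, the block below gives an illustrative gloss of Weak Reciprocity, with Strong Reciprocity shown for comparison, for an antecedent set A and a relation R (cf. Langendoen 1978; Dalrymple et al. 1998); the notation is assumed for exposition and is not quoted from the dissertation.

```latex
% Illustrative gloss of standard definitions; notation assumed, not the dissertation's own.
\begin{align*}
\text{Strong Reciprocity:} \quad & \forall x, y \in A \;(x \neq y \rightarrow R(x,y))\\
\text{Weak Reciprocity:}   \quad & \forall x \in A \;\exists y, z \in A \;(x \neq y \wedge x \neq z \wedge R(x,y) \wedge R(z,x))
\end{align*}
```

    On this gloss, Weak Reciprocity only requires each member of A to participate in R both as first and as second argument, possibly with different partners, which is why it is strictly weaker than Strong Reciprocity.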

    Artificial intelligence in space

    In the coming years, space activities are expected to undergo a radical transformation with the emergence of new satellite systems and new services that incorporate the contributions of artificial intelligence and machine learning, understood here as covering a wide range of innovations, from autonomous objects with their own decision-making power to increasingly sophisticated services exploiting very large volumes of information from space. This chapter identifies some of the legal and ethical challenges linked to the use of artificial intelligence in space. These legal and ethical challenges call for solutions which the international treaties in force are not sufficient to determine and implement. For this reason, a legal methodology must be developed that makes it possible to link intelligent systems and services to a system of rules applicable to them. The chapter also discusses existing legal AI-based tools that could make space law actionable, interoperable, and machine-readable for future compliance tools.