    Automatic techniques for detecting and exploiting symmetry in model checking

    The application of model checking is limited by the state-space explosion problem: as the number of components represented by a model increases, the worst-case size of the associated state space grows exponentially. Current techniques can handle only limited kinds of symmetry, e.g. full symmetry between identical components in a concurrent system. They avoid the problem of automatic symmetry detection by requiring the user to specify the presence of symmetry in a model (explicitly, or by annotating the associated specification with additional language keywords), or by restricting the input language of a model checker so that only symmetric systems can be specified. Additionally, computing unique representatives for each symmetric equivalence class is easy for these limited kinds of symmetry. We present a theoretical framework for symmetry reduction which can be applied to explicit-state model checking. The framework includes techniques for automatic symmetry detection using computational group theory, which can be applied with no additional user input. These techniques detect structural symmetries induced by the topology of a concurrent system, so our framework also includes exact and approximate techniques, likewise based on computational group-theoretic methods, to efficiently exploit the arbitrary symmetry groups which may arise in this way. We prove that our framework is logically sound, and demonstrate its general applicability to explicit-state model checking. By providing a new symmetry reduction package for the SPIN model checker, we show that our framework can be feasibly implemented as part of a system which is widely used in both industry and academia. Through a study of SPIN users, we assess the usability of our automatic symmetry detection techniques in practice.
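
    To make the idea of exact symmetry exploitation concrete, the following minimal Python sketch computes a unique representative for a state's equivalence class by minimising the state's image over a symmetry group. This is an illustrative reconstruction under assumed encodings of states and permutations, not the group-theoretic implementation described in the abstract; under full symmetry, minimising over the group reduces to sorting the state vector.

        from itertools import permutations

        def canonical_representative(state, group):
            # state: tuple of local states, one per component.
            # group: iterable of permutations p, where p[i] names the
            # component whose local state is moved into slot i.
            def apply(perm, s):
                return tuple(s[perm[i]] for i in range(len(s)))
            # The representative of the orbit is its lexicographically
            # least element, so all symmetric states map to one state.
            return min(apply(p, state) for p in group)

        # Under full symmetry (all permutations of identical components),
        # minimising over the group is equivalent to sorting the vector.
        full_group = list(permutations(range(3)))
        assert canonical_representative((2, 0, 1), full_group) == (0, 1, 2)

    Enumerating the whole group is feasible only for small groups; this is where the exact and approximate group-theoretic strategies mentioned above come in.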

    Data-Parallel Spreadsheet Programming

    Studies in Micronesian linguistics

    Evaluating Parsers with Dependency Constraints

    Many syntactic parsers now score over 90% on English in-domain evaluation, but the remaining errors have been challenging to address and difficult to quantify. Standard parsing metrics provide a consistent basis for comparison between parsers, but do not illuminate what errors remain to be addressed. This thesis develops a constraint-based evaluation for dependency and Combinatory Categorial Grammar (CCG) parsers to address this deficiency. We examine constrained and cascaded impact, representing the direct and indirect effects of errors on parsing accuracy. This distinguishes errors that are the underlying source of problems in parses from those that are merely a consequence of those problems. Kummerfeld et al. (2012) propose a static post-parsing analysis to categorise groups of errors into abstract classes, but this cannot account for cascading changes resulting from repairing errors, or for limitations which may prevent the parser from applying a repair. In contrast, our technique is based on enforcing the presence of certain dependencies during parsing, whilst allowing the parser to choose the remainder of the analysis according to its grammar and model. We draw constraints for this process from gold-standard annotated corpora, grouping them into abstract error classes such as NP attachment, PP attachment, and clause attachment. By applying constraints from each error class in turn, we can examine how parsers respond when forced to correctly analyse each class. We show how to apply dependency constraints in three parsers: the graph-based MSTParser (McDonald and Pereira, 2006) and the transition-based ZPar (Zhang and Clark, 2011b) dependency parsers, and the C&C CCG parser (Clark and Curran, 2007b). Each is widely used and influential in the field, and each generates some form of predicate-argument dependencies. We compare the parsers, identifying common sources of error and differences in the distribution of errors between constrained and cascaded impact. Our analysis allows us to contrast the implementations of each parser and how they respond to constraint application. Using this analysis, we experiment with new features for dependency parsing, which encode the frequency of proposed arcs in large-scale corpora derived from scanned books. These features are inspired by and extend the work of Bansal and Klein (2011). We target these features at the most notable errors, and show how they address some, but not all, of the difficult attachments across newswire and web text. CCG parsing is particularly challenging, as different derivations do not always generate different dependencies. We develop dependency hashing to address semantically redundant parses in n-best CCG parsing, and demonstrate its necessity and effectiveness. Dependency hashing substantially improves the diversity of n-best CCG parses, and improves a CCG reranker when used to create training and test data. We show the intricacies of applying constraints to C&C, and describe instances where applying constraints causes the parser to produce a worse analysis. These results illustrate how algorithms which are relatively straightforward for constituency and dependency parsers are non-trivial to implement in CCG. This work has explored dependencies as constraints in dependency and CCG parsing. We have shown how dependency hashing can efficiently eliminate semantically redundant CCG n-best parses, and presented a new evaluation framework based on enforcing the presence of dependencies in the output of the parser. By otherwise allowing the parser to proceed as it would have, we avoid the assumptions inherent in other work. We hope this work will provide insights into the remaining errors in parsing, and target efforts to address those errors, creating better syntactic analysis for downstream applications.
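
    The core of dependency hashing can be illustrated with a short Python sketch: hash each candidate parse's dependency set and keep only the highest-scoring parse per hash, so distinct derivations that yield identical predicate-argument dependencies are collapsed. This is a hypothetical reconstruction, not the C&C implementation; the parse attributes assumed here (a numeric score and a collection of dependency triples) are illustrative.

        def dedupe_nbest(parses, n):
            # Each parse is assumed to expose .score and .dependencies,
            # a collection of (head, relation, dependent) triples.
            seen, kept = set(), []
            for parse in sorted(parses, key=lambda p: p.score, reverse=True):
                # Order-insensitive hash of the dependency set: two CCG
                # derivations producing the same dependencies collide,
                # and the lower-scoring duplicate is discarded.
                key = hash(frozenset(parse.dependencies))
                if key not in seen:
                    seen.add(key)
                    kept.append(parse)
                if len(kept) == n:
                    break
            return kept

    Hashing the set rather than the derivation is the point: semantic equivalence is decided by the dependencies alone, however the derivation arrived at them.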

    A Grammar of Yélî Dnye

    This is a comprehensive description of a language spoken offshore from Papua New Guinea, remarkable for its phonological, morphological and syntactic complexity. As the sole surviving member of its language family, it provides unique evidence for the kind of languages spoken in this part of the world before the Austronesian expansion. The grammar provides detailed information on the phoneme inventory, morphology, syntax and select semantic fields.

    Possessive classifiers in North Ambrym, a language of Vanuatu: explorations in Semantic classification

    North Ambrym, an Oceanic language spoken in Vanuatu, exhibits the two common Oceanic possessive construction types: direct and indirect. This thesis focuses on the indirect construction, which occurs when the possessed noun refers to a semantically alienable item. In North Ambrym the indirect possessive construction is marked by one of a set of possessive classifiers. The prevailing theory within Oceanic linguistics is that possessive classifiers do not classify a property of the possessed noun but rather the relation between possessor and possessed (Lichtenberk 1983b). Thus it is the possessor's intended use of the possessed that is encoded by the possessive classifier, such that an ‘edible’ classifier will be used if the possessor intends to eat the possessed, and the ‘drinkable’ classifier will be used if the possessed is intended to be drunk. This thesis challenges that theory and instead proposes that the classifiers act like possessed classifiers in North Ambrym, characterising a functional property of the possessed noun. Several experiments were conducted that induced different contextual uses of possessions; however, this did not result in classifier change, which would be expected under the relational classifier theory. Each classifier has a large number of seemingly semantically disparate members, which do not all share the semantic features of the central members, so an analysis using the classical theory of classification is untenable. Instead, the classifier categories are best analysed using prototype theory, as certain semantic groups of possessions are considered to be more central members. This hypothesis is supported by further experimentation into classification, which helps define the centrality of classifier category members. Finally, an analysis using cognitive linguistic theory proposes that non-central members are linked to central members via semantic chains, using notions of metaphor and metonymy. All language data from this project has been deposited at the Endangered Languages Archive (ELAR) at SOAS, University of London.

    Visionary Realism And The Emergence Of A Eudaimonistic Society: Metatheory In A Time Of Metacrisis

    This thesis aims to support the conditions for the emergence of a eudaimonistic, free-flourishing planetary society by helping ignite the potential of metatheory as a transformational cultural force vis-à-vis our complex twenty-first-century challenges. I argue that metatheory in its appropriate form provides indispensable intellectual scaffolding for the crucial psycho-spiritual, cultural, and social transformations demanded by these interconnected global challenges, or what I call the metacrisis. I advance these aims, first, by reflecting on the nature, role, and function of metatheory in geo-historical context, articulating a vision for the revindication of metatheory as integrative metatheory 2.0; and, second, by developing the contours of a particular metatheory through an exploratory-dialogical encounter between what are arguably amongst the most comprehensive and sophisticated integrative metatheories arising in the wake of postmodernism: namely, critical realism, founded by Roy Bhaskar (1944–2014), and integral theory, founded by Ken Wilber (1949–). Thus, in this thesis, I deploy the methodology of hermeneutical dialectics and the method of immanent critique to forge a non-preservative synthesis of aspects of these two metatheories into a new metatheory, a visionary realism, that might help us to better understand and wisely respond to the metacrisis. I then apply this visionary realist framework to sketch the contours of the metacrisis at large, analyzing and synthesizing the philosophical, cultural, and psychological aspects of the metacrisis to identify key principles and holistic solution patterns that may inform deliberate social transformation.

    Lexical database enrichment through semi-automated morphological analysis

    Derivational morphology proposes meaningful connections between words and is largely unrepresented in lexical databases. This thesis presents a project to enrich a lexical database with morphological links and to evaluate their contribution to disambiguation. A lexical database with sense distinctions was required; WordNet was chosen because of its free availability and widespread use. Its suitability was assessed through critical evaluation with respect to specifications and criticisms, using a transparent, extensible model. The identification of serious shortcomings suggested a portable enrichment methodology, applicable to alternative resources. Although 40% of the most frequent words are prepositions, they have been largely ignored by computational linguists, so the addition of prepositions was also required. The preferred approach to morphological enrichment was to infer relations from phenomena discovered algorithmically. Both existing databases and existing algorithms can capture regular morphological relations, but cannot capture exceptions correctly; neither provides any semantic information. Some morphological analysis algorithms are subject to the fallacy that morphological analysis can be performed simply by segmentation. Morphological rules, grounded in observation and etymology, govern the association and attachment of suffixes and contribute to defining the meaning of morphological relationships. Specifying character substitutions circumvents the segmentation fallacy. Morphological rules are prone to undergeneration, minimised through a variable lexical validity requirement, and to overgeneration, minimised by rule reformulation and by restricting monosyllabic output. The rules take into account the morphology of ancestor languages through co-occurrences of morphological patterns. Where multiple rules apply to an input suffix, their precedence must be established. The resistance of prefixations to segmentation has been addressed by identifying linking-vowel exceptions and irregular prefixes. The automatic affix discovery algorithm applies heuristics to identify meaningful affixes, and is combined with the morphological rules into a hybrid model, fed only with empirical data collected without supervision. Further algorithms apply the rules optimally to automatically pre-identified suffixes and break words into their component morphemes. To handle exceptions, stoplists were created in response to initial errors and fed back into the model through iterative development, leading to 100% precision, contestable only on lexicographic criteria. Stoplist length is minimised by special treatment of monosyllables and reformulation of rules. 96% of words and phrases are analysed. 218,802 directed derivational links have been encoded in the lexicon rather than the wordnet component of the model, because the lexicon provides the optimal clustering of word senses. Both the links and the analyser are portable to an alternative lexicon. The evaluation uses the extended gloss overlaps disambiguation algorithm. The enriched model outperformed WordNet in terms of recall without loss of precision. The failure of all experiments to outperform disambiguation by frequency reflects on WordNet's sense distinctions.
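
    The central mechanism described here, character-substitution rules filtered by a lexical validity requirement rather than naive segmentation, can be sketched in a few lines of Python. The rule inventory and toy lexicon below are illustrative assumptions, not the thesis's analyser.

        # Each rule maps a suffix of the derived form to the substitution
        # that recovers the base, avoiding the segmentation fallacy:
        # "happiness" is not "happi" + "ness" but "happy" with "-iness"
        # replacing "-y".
        LEXICON = {"happy", "happiness", "deny", "denial", "red", "redness"}
        RULES = [
            ("iness", "y"),   # happiness -> happy
            ("ial",   "y"),   # denial    -> deny
            ("ness",  ""),    # redness   -> red
        ]

        def derive_base(word, lexicon=LEXICON):
            # Return (base, suffix) pairs licensed by a rule AND attested
            # in the lexicon; unattested outputs such as "happi" are
            # rejected by the lexical validity requirement.
            results = []
            for suffix, replacement in RULES:
                if word.endswith(suffix):
                    candidate = word[:-len(suffix)] + replacement
                    if candidate in lexicon:
                        results.append((candidate, suffix))
            return results

        print(derive_base("happiness"))  # [('happy', 'iness')]
        print(derive_base("denial"))     # [('deny', 'ial')]

    Note how "happiness" matches both the "-iness" and "-ness" rules, but the "-ness" output "happi" fails the lexicon check; in the full system, precedence between competing applicable rules would also have to be established, as the abstract notes.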