
    The Forgotten Homerist: Reassessing William Ewart Gladstone's Role in the Victorian Reception of Homer (1872-1884)

    This thesis uses William Gladstone’s Homeric research to reassess the relationship between nineteenth-century Britain and the ancient past. Gladstone (1809-1898), who served as Prime Minister four times during the Victorian period, has often been dismissed as a scholar of Homer: too enthusiastic, too much of a dilettante, too ready to cast aside evidence. But, through a careful examination of unstudied archival evidence, it is possible to build a very different picture. During the 1870s, Gladstone embarks on a Homeric campaign which changes his contemporaries’ understanding of time and history. By carefully exploiting recent archaeological discoveries, particularly Schliemann’s discovery of Troy, Gladstone works to bring both Homer and Troy out of the world of myth and into that of history. As this thesis will demonstrate, for many Victorians it was Gladstone, not Schliemann, who brought Homer’s Troy to light in the ruins of Hissarlik. Working behind the scenes, over the course of many years, Gladstone revolutionises his contemporaries’ understanding of the study of Homer. He pioneers the study of what he calls ‘Homerology’: a new approach to the poems. Gladstone’s Homerology sees the epics as vital sources for the scientific investigation of the ancient past. Gladstone presents Victorian Britain with a new model of time and history, where myth becomes a historical reality. Consequently, for Gladstone, it is the Homerist who, above all, has the right to write about the ancient past of man. Through a series of case studies, which have gone unnoticed or unrecognised by previous scholarship, this thesis demonstrates that Gladstone’s Homer shaped many key Victorian discourses about the earliest history of mankind, from archaeology to evolution. In so doing, it makes the case for a granular, archive-driven methodology for classical reception, one which is equipped to capture the nuances, complications, and complexities of relationships with the ancient past.

    Integrative Levels of Knowing

    This dissertation is concerned with a systematic organization of the epistemological dimension of human knowledge in terms of viewpoints and methods. In particular, it explores to what extent the well-known organizing principle of integrative levels, which presents a developmental hierarchy of increasing complexity and integration, can be applied to a basic classification of viewpoints or epistemic outlooks. The central thesis pursued in this investigation is that an adequate analysis of such epistemic contexts requires tools that make it possible to compare and evaluate divergent or even conflicting frames of reference according to context-transcending standards and criteria. This task demands a theoretical and methodological foundation that avoids the limitations of radical contextualism and its inherent threat of a fragmentation of knowledge due to the alleged incommensurability of the underlying frames of reference. Based on Jürgen Habermas’s Theory of Communicative Action and his methodology of hermeneutic reconstructionism, it is argued that epistemic pluralism does not necessarily imply epistemic relativism and that a systematic organization of the multiplicity of perspectives can benefit from already existing models of cognitive development as reconstructed in research fields like psychology, the social sciences, and the humanities.
The proposed cognitive-developmental approach to knowledge organization aims to contribute to a multi-perspective knowledge organization by offering both analytical tools for cross-cultural comparisons of knowledge organization systems (e.g., the Seven Epitomes and the Dewey Decimal Classification) and organizing principles for context representation that help to improve the expressiveness of existing documentary languages (e.g., the Integrative Levels Classification). Additionally, the appendix includes an extensive compilation of conceptions and models of Integrative Levels of Knowing from a broad multidisciplinary field.

    Resolving XML Semantic Ambiguity

    XML semantic-aware processing has become a motivating and important challenge in Web data management, data processing, and information retrieval. While XML data is semi-structured, it remains prone to lexical ambiguity and thus requires dedicated semantic analysis and sense disambiguation processes to assign well-defined meaning to XML elements and attributes. This becomes crucial in an array of applications ranging over semantic-aware query rewriting, semantic document clustering and classification, schema matching, as well as blog analysis and event detection in social networks and tweets. Most existing approaches in this context: i) ignore the problem of identifying ambiguous XML nodes, ii) only partially consider their structural relations/context, iii) use syntactic information in processing XML data regardless of the semantics involved, and iv) are static in adopting fixed disambiguation constraints, thus limiting user involvement. In this paper, we provide a new XML Semantic Disambiguation Framework, titled XSDF, designed to address each of the above motivations, taking as input an XML document and a general-purpose semantic network, and producing as output a semantically augmented XML tree made of unambiguous semantic concepts. Experiments demonstrate the effectiveness of our approach in comparison with alternative methods. General Terms: Algorithms, Measurement, Performance, Design, Experimentation. Keywords: XML semantic-aware processing, ambiguity degree, sphere neighborhood, XML context vector, semantic network, semantic disambiguation.
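As a rough illustration of the pipeline the abstract describes (XML document plus general-purpose semantic network in, sense-annotated tree out), here is a minimal sketch that disambiguates XML element names with a Lesk-style gloss-overlap heuristic over each node's structural neighborhood. It assumes NLTK's WordNet as the semantic network and a hypothetical document.xml; it is not the actual XSDF algorithm, which relies on ambiguity degrees, sphere neighborhoods, and XML context vectors.

```python
# Minimal sketch: Lesk-style sense disambiguation of XML element names,
# using each node's structural neighborhood (parent tag, child tags,
# attribute names) as context. Illustrative only -- not XSDF itself.
import xml.etree.ElementTree as ET
from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')

def node_context(elem, parent_map):
    """Collect context words from the node's structural neighborhood."""
    words = set(elem.attrib)                   # attribute names
    parent = parent_map.get(elem)
    if parent is not None:
        words.add(parent.tag)                  # parent tag
    words.update(child.tag for child in elem)  # child tags
    return {w.lower() for w in words}

def disambiguate(tag, context):
    """Pick the WordNet synset whose gloss overlaps the context most."""
    best, best_score = None, -1
    for synset in wn.synsets(tag, pos=wn.NOUN):
        gloss = set(synset.definition().lower().split())
        score = len(gloss & context)
        if score > best_score:
            best, best_score = synset, score
    return best

tree = ET.parse("document.xml")                # hypothetical input file
root = tree.getroot()
parent_map = {c: p for p in root.iter() for c in p}
for elem in root.iter():
    sense = disambiguate(elem.tag, node_context(elem, parent_map))
    if sense is not None:
        elem.set("sense", sense.name())        # e.g. sense="bank.n.01"
tree.write("document.disambiguated.xml")
```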

    Acts of killing, acts of meaning: an application of corpus pattern analysis to language of animal-killing

    We are currently witnessing unprecedented levels of ecological destruction and violence visited upon nonhumans. Study of the more-than-human world is now being enthusiastically taken up across a range of disciplines, in what has been called the ‘scholarly animal turn’. This thesis brings together concerns of Critical Animal Studies – along with related threads of posthumanism and new materialist thinking – and Corpus Linguistics, specifically Corpus Pattern Analysis (CPA), to produce a data-driven, lexicocentric study of the discourse of animal-killing. CPA, which has been employed predominantly in corpus lexicography, provides a robust and empirically well-founded basis for the analysis of verbs. Verbs are chosen as they act as the pivot of a clause; analysing them also uncovers their arguments – in this case, participants in material-discursive ‘killing’ events. This project analyses 15 ‘killing’ verbs using CPA as a basis, in what I term a corpus-lexicographical discourse analysis. The data is sampled from an animal-themed corpus of around 9 million words of contemporary British English, and the British National Corpus is used for reference. The findings are both methodological and substantive. CPA is found to be a reliable empirical starting point for discourse analysis, and the lexicographical practice of establishing linguistic ‘norms’ is critical to the identification of anomalous uses. The thesis presents evidence of anthropocentrism inherent in the English lexicon, and demonstrates several ways in which distance is created between participants of ‘killing’ constructions. The analysis also reveals specific ways that verbs can obfuscate, deontologise and deindividualise their arguments. The recommendations for discourse analysts include the adoption of CPA and a critical analysis of its resulting patterns in order to demonstrate the precise mechanisms by which verb use can either oppress or empower individuals. Social justice advocates are also alerted to potentially harmful language that might undermine their cause.
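As a concrete illustration of how verb arguments can be recovered, the sketch below collects subject and object arguments of a few ‘killing’ verbs with dependency parsing. It is a minimal proxy for CPA-style pattern collection, assuming spaCy with its en_core_web_sm model; the verb list and sample sentences are illustrative, not the thesis's 15-verb inventory or its 9-million-word corpus.

```python
# Minimal sketch: collect subject/object arguments of 'killing' verbs via
# dependency parsing -- a rough proxy for CPA-style pattern collection.
# Assumes spaCy and its en_core_web_sm model are installed.
import spacy

KILL_VERBS = {"kill", "slaughter", "cull", "destroy", "euthanize"}  # assumed list

nlp = spacy.load("en_core_web_sm")

def killing_patterns(texts):
    """Yield (verb lemma, subjects, objects) for clauses headed by a 'killing' verb."""
    for doc in nlp.pipe(texts):
        for token in doc:
            if token.pos_ == "VERB" and token.lemma_ in KILL_VERBS:
                subjects = [c.text for c in token.children
                            if c.dep_ in ("nsubj", "nsubjpass")]
                objects = [c.text for c in token.children if c.dep_ == "dobj"]
                yield token.lemma_, subjects, objects

samples = [
    "The farmer slaughtered the pigs.",
    "Thousands of badgers were culled last year.",
]
for verb, subjects, objects in killing_patterns(samples):
    print(verb, subjects, objects)
# Note how the passive ('badgers were culled') puts the patient in the
# subject slot and omits the agent -- one way distance is created.
```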

    Comparing two thesaurus representations for Russian

    In this paper we present a new Russian wordnet, RuWordNet, which was semi-automatically obtained by transformation of the existing Russian thesaurus RuThes. In the first step, the basic structure of wordnets was reproduced: a synset hierarchy for each part of speech and the basic set of relations between synsets (hyponym-hypernym, part-whole, antonymy). In the second stage, we added causation, entailment and domain relations between synsets. Derivation relations were also established for single words, together with the component structure for phrases included in RuWordNet. The described transformation procedure highlights the specific features of each type of thesaurus representation.
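A minimal sketch of the target structure involved may help: synsets per part of speech, linked by typed relations that are added in stages (the basic hierarchy first, then causation, entailment and domain links). The data structures and entries below are hypothetical illustrations, not the project's actual transformation code.

```python
# Minimal sketch: a wordnet-style synset store with typed relations,
# filled in the two stages described above. Entries are hypothetical.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Synset:
    synset_id: str
    pos: str                                   # 'N', 'V', 'Adj'
    senses: list
    relations: dict = field(default_factory=lambda: defaultdict(set))

class WordNet:
    def __init__(self):
        self.synsets = {}

    def add_synset(self, synset_id, pos, senses):
        self.synsets[synset_id] = Synset(synset_id, pos, list(senses))

    def add_relation(self, source_id, rel_type, target_id):
        self.synsets[source_id].relations[rel_type].add(target_id)

ruwordnet = WordNet()
# Stage 1: reproduce the basic structure per part of speech.
ruwordnet.add_synset("N001", "N", ["собака"])    # 'dog'
ruwordnet.add_synset("N002", "N", ["животное"])  # 'animal'
ruwordnet.add_synset("V001", "V", ["лаять"])     # 'to bark'
ruwordnet.add_relation("N001", "hypernym", "N002")
# Stage 2: layer on causation, entailment and domain relations.
ruwordnet.add_relation("V001", "domain", "N001")  # hypothetical link
```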

    Creating large semantic lexical resources for the Finnish language

    Finnish belongs to the Finno-Ugric language family, and it is spoken by the vast majority of the people living in Finland. The motivation for this thesis is to contribute to the development of a semantic tagger for Finnish. This tool is a counterpart of the English Semantic Tagger, which has been developed at the University Centre for Computer Corpus Research on Language (UCREL) at Lancaster University since the beginning of the 1990s and which has over the years proven to be a very powerful tool in the automatic semantic analysis of English spoken and written data. The English Semantic Tagger has various successful applications in the fields of natural language processing and corpus linguistics, and new application areas emerge all the time. The semantic lexical resources that I have created in this thesis provide the knowledge base for the Finnish Semantic Tagger. My main contributions are the lexical resources themselves, along with a set of methods and guidelines for their creation and expansion, both as a general language resource and as tailored for domain-specific applications. Furthermore, I propose and carry out several methods for evaluating semantic lexical resources. In addition to the English Semantic Tagger, which was developed first, and the Finnish Semantic Tagger, which came second, equivalent semantic taggers have now been developed for Czech, Chinese, Dutch, French, Italian, Malay, Portuguese, Russian, Spanish, Urdu, and Welsh. All these semantic taggers taken together form a program framework called the UCREL Semantic Analysis System (USAS), which enables the development of not only monolingual but also various types of multilingual applications. Large-scale semantic lexical resources designed for Finnish using semantic fields as the organizing principle have not been attempted previously. Thus, the Finnish semantic lexicons created in this thesis are a unique and novel resource. The lexical coverage on the test corpora containing general modern standard Finnish, which has been the focus of the lexicon development, ranges from 94.58% to 97.91%. However, the results are also very promising in the analysis of domain-specific text (95.36%), older Finnish text (92.11–93.05%), and Internet discussions (91.97–94.14%). The results of the evaluation of lexical coverage are comparable to the results obtained with the English equivalents and thus indicate that the Finnish semantic lexical resources indeed cover the majority of core Finnish vocabulary.
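Lexical coverage figures like those above measure the share of corpus tokens to which the lexicon can assign a semantic tag. A minimal sketch of such a computation follows; the file names, tab-separated lexicon format and whitespace tokenization are assumptions, and a real Finnish pipeline would match lemmas from a morphological analyser rather than raw surface forms, given Finnish's rich morphology.

```python
# Minimal sketch: lexical coverage of a test corpus against a semantic
# lexicon, i.e. the percentage of tokens the lexicon has an entry for.
# File names and the whitespace tokenizer are illustrative assumptions.
def load_lexicon(path):
    """Read a lexicon with one 'word<TAB>semantic_tag' entry per line."""
    lexicon = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, tag = line.rstrip("\n").split("\t", 1)
            lexicon[word.lower()] = tag
    return lexicon

def lexical_coverage(tokens, lexicon):
    """Percentage of corpus tokens present in the lexicon."""
    if not tokens:
        return 0.0
    known = sum(1 for t in tokens if t.lower() in lexicon)
    return 100.0 * known / len(tokens)

lexicon = load_lexicon("finnish_semantic_lexicon.tsv")   # hypothetical
with open("test_corpus.txt", encoding="utf-8") as f:     # hypothetical
    tokens = f.read().split()
print(f"Lexical coverage: {lexical_coverage(tokens, lexicon):.2f}%")
```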

    Social work with airports passengers

    Social work at the airport consists in offering social services to passengers. The main methodological position is that such passengers are under stress, which is characterized by a particular set of features in their appearance and behavior. Under these circumstances the passenger's actions attract attention. Only a person whom the passenger trusts can help him, whether with documents or psychologically.

    Porcine Spine Finite Element Model of Progressive Experimental Scoliosis and Assessment of a New Dual-Epiphyseal Growth Modulating Implant

    Scoliosis is a complex three-dimensional deformity of the spine whose etiology is yet to be elucidated. The pathomechanism of scoliosis progression is believed to be linked to the Hueter-Volkmann principle, by which growth is reduced under increased growth plate compression, with the inverse also valid. Treatment strategies are challenging, especially in young children. Curves progressing beyond a 40° Cobb angle are typically treated via invasive surgical interventions requiring spinal instrumentation accompanied by segmental spinal arthrodesis, impairing spinal mobility. New devices aim at manipulating vertebral growth by exploiting the Hueter-Volkmann principle to control curvature progression. These fusionless implants harness remaining vertebral growth by manipulating growth gradients to reverse vertebral wedging locally and, over time, realign the spine globally. Clinical trials have demonstrated promising deformity correction for curves generally below 45°; however, current devices bridge the intervertebral disc space and predominantly compress the disc, increasing the risks of long-term disc degeneration.
Moreover, in a time-consuming manner, newly designed implants are commonly tested using equivalent animal models to assess their efficacy in correcting spinal deformities via the inverse approach (creation of a deformity) or the 2-step approach (creation of a deformity followed by its subsequent correction). Nevertheless, a solid design platform is required to evaluate the short- and long-term growth-manipulating efficacy of new implant designs and shorten knowledge transfer to clinical applications. The general objective of this thesis was to develop and verify a unique porcine spine finite element model (pFEM) as an alternative testing platform for the simulation of progressive experimental scoliosis and fusionless implants, and to assess a new localized dual-epiphyseal implant in immature pigs. Thus, specific objectives were devised as follows: 1) develop and verify a distinctive pFEM of the spine and ribcage; 2) develop and test, in vivo, a dual-epiphyseal implant incorporating a custom expansion mechanism; 3) exploit the developed pFEM to investigate differences between the inverse and 2-step fusionless implant testing approaches; and 4) exploit the pFEM to evaluate the biomechanical contribution of the ribcage in fusionless scoliosis surgery.
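The Hueter-Volkmann principle that both the experimental deformity and the implant exploit is often expressed in a linear stress-modulated form, G = Gm(1 + beta * (sigma - sigma_m)). The sketch below applies that update asymmetrically to the two sides of a vertebra to show how a sustained compression difference accumulates into wedging; the parameter values are illustrative assumptions, not the thesis's calibrated pFEM.

```python
# Minimal sketch: stress-modulated vertebral growth in a linear
# Hueter-Volkmann form, G = Gm * (1 + beta * (sigma - sigma_m)),
# where compressive stress (sigma < 0) slows growth. Parameter values
# are illustrative, not taken from the thesis's calibrated pFEM.
def growth_rate(gm, beta, sigma, sigma_m=0.0):
    """Local growth per time step under axial stress sigma (MPa, compression < 0)."""
    return gm * (1.0 + beta * (sigma - sigma_m))

gm = 0.05    # baseline growth per time step, mm (assumed)
beta = 1.5   # stress sensitivity, 1/MPa (assumed)

concave = convex = 0.0   # cumulative growth on each side of the vertebra, mm
for _ in range(100):
    concave += growth_rate(gm, beta, sigma=-0.3)  # more compression
    convex += growth_rate(gm, beta, sigma=-0.1)   # less compression

# The growth differential accumulates as vertebral wedging, which a
# fusionless implant tries to reverse by flipping the stress gradient.
print(f"height difference after 100 steps: {convex - concave:.2f} mm")
```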

    The development of a framework for semantic similarity measures for the Arabic language

    This thesis presents a novel framework for developing an Arabic Short Text Semantic Similarity (STSS) measure, namely NasTa. STSS measures are developed for short texts 10-25 words long. The algorithm calculates STSS based on part of speech (POS), Arabic Word Sense Disambiguation (WSD), semantic nets and corpus statistics. The proposed framework is founded on word similarity measures. Firstly, a novel Arabic noun similarity measure is created using information sources extracted from a lexical database known as Arabic WordNet. Secondly, a novel verb similarity algorithm is created based on the assumption that words sharing a common root usually have a related meaning, a central characteristic of the Arabic language. Two Arabic word benchmark datasets, one of nouns and one of verbs, are created to evaluate them; these are the first of their kind for Arabic. Their creation methodologies use the best available experimental techniques to create materials and collect human ratings from representative samples of the Arabic-speaking population. Experimental evaluation indicates that the Arabic noun and verb measures performed well, achieving good correlations with average human performance on the noun and verb benchmark datasets respectively. Specific features of the Arabic language are addressed. A new Arabic WSD algorithm is created to address the challenge of ambiguity caused by missing diacritics in the contemporary Arabic writing system. The algorithm disambiguates all words (nouns and verbs) in Arabic short texts without requiring any manual training data. Moreover, a novel algorithm is presented to identify the similarity score between two words belonging to different POS, either a noun-verb or a verb-noun pair. This algorithm performs Arabic WSD based on the concept of noun semantic similarity. Important benchmark datasets for text similarity are presented: ASTSS-68 and ASTSS-21. Experimental results indicate that the Arabic STSS algorithm achieved a good, statistically significant correlation with average human performance on ASTSS-68.
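The evaluation reported here, correlation with average human ratings on a benchmark, can be illustrated compactly. The sketch below scores sentence pairs with a stand-in measure (token-set Jaccard overlap, not the NasTa algorithm) and computes the Pearson correlation against mean human ratings; the pairs, ratings and scale are made-up illustrations of the benchmark format.

```python
# Minimal sketch: evaluate a short-text similarity measure against mean
# human ratings via Pearson correlation. The measure is a placeholder
# (token-set Jaccard), and the data below is invented for illustration.
from scipy.stats import pearsonr

def jaccard_similarity(text_a, text_b):
    """Placeholder STSS measure: token-set Jaccard overlap."""
    a, b = set(text_a.split()), set(text_b.split())
    return len(a & b) / len(a | b) if (a | b) else 0.0

def evaluate(pairs, mean_human_ratings, measure):
    """Pearson correlation between measure scores and human ratings."""
    scores = [measure(a, b) for a, b in pairs]
    return pearsonr(scores, mean_human_ratings)

pairs = [
    ("the cat sat on the mat", "a cat sat on a mat"),
    ("he left early", "she sang well"),
    ("open the door", "shut the door"),
]
mean_human = [3.6, 0.2, 2.1]   # invented ratings on a 0-4 scale
r, p = evaluate(pairs, mean_human, jaccard_similarity)
print(f"Pearson r = {r:.3f} (p = {p:.3f})")
```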