
    Durham - a word sense disambiguation system

    Ever since the 1950s, when machine translation first began to be developed, word sense disambiguation (WSD) has posed a problem for developers. More recently, all NLP tasks that are sensitive to lexical semantics have come to be seen as potential beneficiaries of WSD, although to what extent remains largely unknown. The thesis presents a novel approach to WSD on a large scale. In particular, a novel knowledge source named contextual information is presented. This knowledge source adopts a sub-symbolic training mechanism to learn, from the context of a sentence, information that can aid disambiguation. The system also takes advantage of frequency information, and these two knowledge sources are combined. The system is trained and tested on SEMCOR. A novel disambiguation algorithm is also developed. The algorithm must tackle the problem of the large number of possible sense combinations in a sentence; the algorithm presented aims to strike an appropriate balance between accuracy and efficiency by directing the search at the word level. The performance achieved on SEMCOR is reported, and an analysis of the various components of the system is performed. The results achieved on this test data are pleasing, but are difficult to compare with most of the other work carried out in the field. For this reason the system took part in the SENSEVAL evaluation, which provided an excellent opportunity to compare WSD systems extensively. SENSEVAL is a small-scale WSD evaluation using the HECTOR lexicon; despite this, few adaptations to the system were required. The performance of the system on the SENSEVAL task is reported and has also been presented in [Hawkins, 2000].
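The combination of a frequency knowledge source with a contextual one, and the word-level direction of the search, can be sketched as follows (a minimal illustration; the weighting scheme, score names, and toy senses are assumptions, not the thesis's actual implementation):

```python
# Hedged sketch: blend two normalised per-sense scores and resolve each
# word independently. Resolving word by word avoids scoring every
# sentence-level combination of senses, which grows combinatorially.

def disambiguate(word_senses, freq_score, ctx_score, alpha=0.5):
    """Pick, for each word, the sense maximising a linear blend of
    frequency evidence and contextual evidence."""
    best = {}
    for word, senses in word_senses.items():
        best[word] = max(
            senses,
            key=lambda s: alpha * freq_score[word][s]
                          + (1 - alpha) * ctx_score[word][s],
        )
    return best

# Toy example: two candidate senses for "bank".
senses = {"bank": ["bank#river", "bank#finance"]}
freq = {"bank": {"bank#river": 0.3, "bank#finance": 0.7}}
ctx = {"bank": {"bank#river": 0.9, "bank#finance": 0.1}}
print(disambiguate(senses, freq, ctx))  # contextual evidence outweighs frequency here
```

With `alpha = 0.5` the contextual score dominates in this toy case, so the rarer river sense is chosen; raising `alpha` shifts the system toward the most-frequent-sense baseline.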

    Automated mood boards - Ontology-based semantic image retrieval

    The main goal of this research is to support concept designers’ search for inspirational and meaningful images in developing mood boards. Finding the right images has become a well-known challenge as the amount of images stored and shared on the Internet and elsewhere keeps increasing steadily and rapidly. The development of image retrieval technologies, which collect, store and pre-process image information to return relevant images instantly in response to users’ needs, has achieved great progress in the last decade. However, the keyword-based content description and query processing techniques currently used for Image Retrieval (IR) have their limitations. Most of these techniques are adapted from Information Retrieval research, and therefore provide limited capabilities to grasp and exploit conceptualisations, owing to their inability to handle ambiguity, synonymy, and semantic constraints. Conceptual search (i.e. searching by meaning rather than by literal strings) aims to overcome the limitations of the keyword-based models. Starting from this point, this thesis investigates the existing IR models oriented to the exploitation of domain knowledge in support of semantic search capabilities, with a focus on the use of lexical ontologies to improve the semantic perspective. It introduces a technique for extracting semantic DNA (SDNA) from textual image annotations and constructing semantic image signatures. These semantic signatures are called semantic chromosomes; they contain semantic information related to the images. Central to the method of constructing semantic signatures is the concept disambiguation technique developed, which identifies the most relevant SDNA by measuring the semantic importance of each word or phrase in the image annotation. In addition, a conceptual model of an ontology-based system for generating visual mood boards is proposed. The proposed model, which is adapted from the Vector Space Model, exploits semantic chromosomes in semantic indexing and in assessing the semantic similarity of images within a collection.
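The Vector Space Model adaptation described above can be illustrated with a minimal sketch in which semantic chromosomes are treated as sparse concept-to-weight vectors compared by cosine similarity (the concept IDs and weights below are invented for illustration, not taken from the thesis):

```python
import math

# Hedged sketch: each semantic signature is a sparse mapping from
# concept to weight; similarity between two images is the cosine of
# the angle between their signatures, as in the Vector Space Model.

def cosine(sig_a, sig_b):
    """Cosine similarity between two sparse semantic signatures."""
    shared = set(sig_a) & set(sig_b)
    dot = sum(sig_a[c] * sig_b[c] for c in shared)
    norm_a = math.sqrt(sum(w * w for w in sig_a.values()))
    norm_b = math.sqrt(sum(w * w for w in sig_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

calm_sea = {"ocean": 0.8, "calm": 0.6, "blue": 0.2}
stormy_sea = {"ocean": 0.7, "storm": 0.7}
print(cosine(calm_sea, stormy_sea))  # ≈ 0.55: shared "ocean" concept only
```

An image collection would then be ranked against a query signature by this score, exactly as documents are ranked against term vectors in the classical model.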

    Semi-automated co-reference identification in digital humanities collections

    Locating specific information within museum collections represents a significant challenge for collection users. Even when the collections and catalogues exist in a searchable digital format, formatting differences and the imprecise nature of the information to be searched mean that information can be recorded in a large number of different ways. This variation exists not just between different collections, but also within individual ones. Traditional information retrieval techniques are therefore badly suited to the challenge of locating particular information in digital humanities collections, and searching consequently takes an excessive amount of time and resources. This thesis focuses on a particular search problem, that of co-reference identification: the process of identifying when the same real-world item is recorded in multiple digital locations. In this thesis, a real-world example of a co-reference identification problem for digital humanities collections is identified and explored, in particular the time-consuming nature of identifying co-referent records. In order to address the identified problem, this thesis presents a novel method for co-reference identification between digitised records in humanities collections. Whilst the specific focus of this thesis is co-reference identification, elements of the method described also have applications for general information retrieval. The new co-reference method uses elements from a broad range of areas including: query expansion, co-reference identification, short-text semantic similarity and fuzzy logic. The new method was tested against real-world collections information, and the results suggest that, in terms of the quality of the co-referent matches found, the new co-reference identification method is at least as effective as a manual search. The number of co-referent matches found, however, is higher using the new method.
    The approach presented here is capable of searching collections stored using differing metadata schemas. More significantly, the approach is capable of identifying potential co-reference matches despite the highly heterogeneous and syntax-independent nature of the Gallery, Library, Archive and Museum (GLAM) search space and the photo-history domain in particular. The most significant benefit of the new method is, however, that it requires comparatively little manual intervention. A co-reference search using it therefore has significantly lower person-hour requirements than a manually conducted search. In addition to the overall co-reference identification method, this thesis also presents:
    • A novel and computationally lightweight short-text semantic similarity metric. This new metric has a significantly higher throughput than the current prominent techniques, with a negligible drop in accuracy.
    • A novel method for comparing photographic processes in the presence of variable terminology and inaccurate field information. This is the first computational approach to do so.
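A computationally lightweight short-text similarity metric of the kind described above can be sketched as stopword-filtered token Jaccard overlap (a deliberately simple stand-in: the thesis's actual metric, stopword list, and catalogue fields are not shown here, and the record strings below are invented):

```python
# Hedged sketch: high-throughput short-text similarity for catalogue
# records. Tokenise, drop stopwords, then compare the token sets.

STOPWORDS = {"a", "an", "the", "of", "in", "on", "by"}

def tokens(text):
    """Lower-cased content tokens of a short record string."""
    return {t for t in text.lower().split() if t not in STOPWORDS}

def similarity(a, b):
    """Jaccard overlap of content tokens, in [0, 1]."""
    ta, tb = tokens(a), tokens(b)
    union = ta | tb
    return len(ta & tb) / len(union) if union else 0.0

# Two differently formatted records describing the same item score 1.0.
rec1 = "albumen print of the cathedral"
rec2 = "cathedral albumen print"
print(similarity(rec1, rec2))
```

Candidate record pairs scoring above a tuned threshold would then be surfaced as potential co-referent matches for human confirmation, which is where the low manual-intervention benefit comes from.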

    Contribution to the definition of flexible information retrieval models based on CP-Nets

    This thesis addresses two main problems in IR: automatic query weighting and document semantic indexing. Our overall contribution consists in the definition of a theoretical flexible information retrieval (IR) model based on CP-Nets. The CP-Net formalism is used for the graphical representation of flexible queries expressing qualitative preferences, and for the automatic weighting of such queries. Furthermore, the CP-Net formalism is used as an indexing language in order to represent document-representative concepts and the relations between them in a relatively compact way. Concepts are identified by projection onto WordNet. Concept relations are discovered by means of semantic association rules. A query evaluation mechanism based on CP-Net graph similarity is also proposed.
    This thesis work addresses two main problems in information retrieval: (1) the automatic formalisation of user preferences (i.e. automatic query weighting) and (2) semantic indexing. In our first contribution, we propose a flexible information retrieval (IR) approach based on CP-Nets (Conditional Preference Networks). The CP-Net formalism is used, on the one hand, for the graphical representation of flexible queries expressing qualitative preferences and, on the other, for the flexible evaluation of document relevance. For the user, expressing qualitative preferences is simpler and more intuitive than formulating numerical weights to quantify them; an automated system, however, reasons more easily over ordinal weights. We therefore propose an approach for automatic query weighting that quantifies the corresponding CP-Nets with utility values. This quantification yields a UCP-Net, which corresponds to a weighted Boolean query.
    The use of CP-Nets is also proposed for representing documents, with a view to the flexible evaluation of queries weighted in this way. In our second contribution, we propose a conceptual indexing approach based on CP-Nets. We propose to use the CP-Net formalism as an indexing language in order to represent concepts and the conditional relations between them in a relatively compact way. The nodes of the CP-Net are the concepts representing the document's content, and the relations between these nodes express the conditional associations that link them. Our contribution is twofold: on the one hand, we propose an approach for extracting concepts using WordNet; the resulting concepts form the nodes of the CP-Net. On the other hand, we propose to extend and use the association-rule technique in order to discover the conditional relations between the concept nodes of the CP-Net. Finally, we propose a query evaluation mechanism based on graph matching (namely, between the document and query CP-Nets).
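The association-rule step for discovering conditional relations between concept nodes can be sketched as classic support/confidence mining over per-document concept sets (the concept sets, thresholds, and function name below are invented; the thesis extends the technique well beyond this toy form):

```python
from itertools import combinations

# Hedged sketch: mine rules (a -> b) between concepts that co-occur in
# documents; rules clearing both thresholds become conditional edges
# in the concept graph.

def mine_rules(concept_sets, min_support=0.5, min_conf=0.8):
    """Return (antecedent, consequent, confidence) triples."""
    n = len(concept_sets)
    single, pair = {}, {}
    for cs in concept_sets:
        for c in cs:
            single[c] = single.get(c, 0) + 1
        for p in combinations(sorted(cs), 2):
            pair[p] = pair.get(p, 0) + 1
    out = []
    for (a, b), k in pair.items():
        if k / n >= min_support:          # pair is frequent enough
            for x, y in ((a, b), (b, a)): # test both rule directions
                conf = k / single[x]      # P(y | x)
                if conf >= min_conf:
                    out.append((x, y, conf))
    return out

# Toy corpus of three documents' concept sets.
docs = [{"jaguar", "cat"}, {"jaguar", "cat", "zoo"}, {"jaguar", "car"}]
print(mine_rules(docs))  # "cat" reliably implies "jaguar", not vice versa
```

The asymmetry of confidence is the point: "cat" always co-occurs with "jaguar" in this toy corpus, while "jaguar" also appears with "car", so only one direction survives as a conditional relation.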

    Sentence-level sentiment tagging across different domains and genres

    The demand for information about sentiment expressed in texts has stimulated a growing interest in automatic sentiment analysis in Natural Language Processing (NLP). This dissertation is motivated by an unmet need for high-performance domain-independent sentiment taggers and by pressing theoretical questions in NLP, where the exploration of the limitations of specific approaches, as well as of the synergies between them, remains practically unaddressed. This study focuses on sentiment tagging at the sentence level and covers four genres: news, blogs, movie reviews, and product reviews. It draws comparisons between sentiment annotation at different linguistic levels (words, sentences, and texts) and highlights the key differences between supervised machine learning methods that rely on annotated corpora (corpus-based approaches, CBA) and lexicon-based approaches (LBA) to sentiment tagging. Exploring the performance of the supervised corpus-based approach to sentiment tagging, this study highlights the strong domain-dependence of CBA. I present the development of LBA approaches based on general lexicons, such as WordNet, as a potential solution to the domain portability problem. A system for sentiment marker extraction from WordNet's relations and glosses is developed and used to acquire lists for a lexicon-based system for sentiment annotation at the sentence and text levels. It demonstrates that LBA's performance across domains is more stable than that of CBA. Finally, the study proposes an integration of LBA and CBA in an ensemble of classifiers using a precision-based voting technique that allows the ensemble system to incorporate the best features of both CBA and LBA. This combined approach outperforms both base learners and provides a promising solution to the domain-adaptation problem.
    The study contributes to NLP (1) by developing algorithms for the automatic acquisition of sentiment-laden words from dictionary definitions; (2) by conducting a systematic study of approaches to sentiment classification and of the factors affecting their performance; (3) by refining the lexicon-based approach with valence shifter handling and parse tree information; and (4) by developing the combined CBA/LBA approach, which brings together the strengths of the two approaches and allows domain adaptation with limited amounts of labeled training data.
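The precision-based voting technique can be sketched as follows: each base learner's vote is weighted by its held-out precision for the label it predicts, so a classifier that is rarely wrong about a given label carries more weight for that label (the learner names and precision figures below are invented for illustration, not the dissertation's measured values):

```python
# Hedged sketch of precision-based voting over base classifiers.

def vote(predictions, precision):
    """Combine per-classifier predictions into one label.

    predictions: {classifier_name: predicted_label}
    precision:   {classifier_name: {label: held_out_precision}}
    """
    scores = {}
    for clf, label in predictions.items():
        # Weight each vote by that classifier's precision on its label.
        scores[label] = scores.get(label, 0.0) + precision[clf][label]
    return max(scores, key=scores.get)

preds = {"cba": "positive", "lba": "negative"}
prec = {
    "cba": {"positive": 0.60, "negative": 0.70},  # corpus-based learner
    "lba": {"positive": 0.75, "negative": 0.80},  # lexicon-based learner
}
print(vote(preds, prec))  # the more precise "negative" vote wins
```

Because the weights are per-label, the ensemble can defer to CBA on the domains and labels where it is precise and to LBA elsewhere, which is the mechanism behind the domain-adaptation claim.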

    Knowledge-Based Task Structure Planning for an Information Gathering Agent

    An effective solution to model and apply planning domain knowledge for deliberation and action in probabilistic, agent-oriented control is presented. Specifically, the addition of a task structure planning component, and supporting components, to an agent-oriented architecture and agent implementation is described. For agent control in risky or uncertain environments, an approach and method of goal reduction to task plan sets and schedules of action is presented. Additionally, some issues related to the component-wise, situation-dependent control of a task-planning agent that schedules its tasks separately from planning them are motivated and discussed.
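Goal reduction to task plans can be sketched as a depth-first expansion over a method table, in the style of hierarchical task decomposition (the method table and task names below are invented; the planner described above also handles plan sets, schedules, and uncertainty, which this toy omits):

```python
# Hedged sketch: reduce a goal to an ordered plan of primitive tasks
# by recursively expanding entries in a method table.

METHODS = {
    "gather_info": ["select_sources", "query_sources", "merge_results"],
    "query_sources": ["send_query", "collect_replies"],
}

def reduce_goal(goal):
    """Depth-first expansion of a goal into primitive tasks."""
    if goal not in METHODS:
        return [goal]  # no method entry: treat as a primitive task
    plan = []
    for subgoal in METHODS[goal]:
        plan.extend(reduce_goal(subgoal))
    return plan

print(reduce_goal("gather_info"))
```

In the architecture described above, a separate scheduling component would then order and time such primitive tasks against resource and risk constraints, rather than the planner doing both jobs at once.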

    Complementing WordNet with Roget's and corpus-based thesauri for information retrieval


    Digital libraries: advanced methods and technologies, digital collections

    Digital libraries are a field of research and development aimed at advancing the theory and practice of the processing, dissemination, storage, analysis and retrieval of digital data of various kinds. The main goal of the RCDL conference series is to build a community of Russian specialists conducting research and development in the field of digital libraries and related areas. The 2009 All-Russian scientific conference (RCDL'2009) is the eleventh conference on this topic (1999 – St. Petersburg, 2000 – Protvino, 2001 – Petrozavodsk, 2002 – Dubna, 2003 – St. Petersburg, 2004 – Pushchino, 2005 – Yaroslavl, 2006 – Suzdal, 2007 – Pereslavl-Zalessky, 2008 – Dubna). This volume contains the texts of the papers, short communications and posters selected by the RCDL'2009 Programme Committee following peer review.