Evaluation of taxonomic and neural embedding methods for calculating semantic similarity
Modelling semantic similarity plays a fundamental role in lexical semantic applications. A natural way of calculating semantic similarity is to consult handcrafted semantic networks, but similarity can also be predicted in a distributional vector space. Similarity calculation remains a challenging task, even with the latest breakthroughs in deep neural language models. We first examined popular methods of measuring taxonomic similarity, including edge-counting, which relies solely on the semantic relations in a taxonomy, as well as more complex methods that estimate concept specificity. We further investigated three weighting factors for modelling taxonomic similarity. To study the distinct mechanisms of taxonomic and distributional similarity measures, we ran head-to-head comparisons of each measure against human similarity judgements from the perspectives of word frequency, degree of polysemy and similarity intensity. Our findings suggest that, without fine-tuning the uniform distance, taxonomic similarity measures can rely on shortest path length as the prime factor for predicting semantic similarity; that, in contrast to distributional semantics, edge-counting is free from sense-distribution bias and can measure word similarity both literally and metaphorically; and that the synergy of retrofitting neural embeddings with concept relations for similarity prediction may indicate a new trend of leveraging knowledge bases in transfer learning. A large gap still exists in computing semantic similarity across different ranges of word frequency, degree of polysemy and similarity intensity.
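The role of shortest path length in edge-counting measures can be illustrated on a toy taxonomy. This is a generic sketch, not the authors' implementation: the miniature is-a hierarchy and the 1/(1 + distance) scoring rule below are invented for illustration only.

```python
from collections import deque

# Hypothetical toy taxonomy: child -> parent (is-a) links.
TAXONOMY = {
    "dog": "canine", "wolf": "canine", "cat": "feline",
    "canine": "carnivore", "feline": "carnivore",
    "carnivore": "mammal", "mammal": "animal",
}

def _neighbors(node):
    """Treat is-a links as undirected edges for path counting."""
    out = set()
    if node in TAXONOMY:
        out.add(TAXONOMY[node])
    out.update(child for child, parent in TAXONOMY.items() if parent == node)
    return out

def shortest_path_length(a, b):
    """BFS over the taxonomy graph; returns the edge count between concepts."""
    frontier, seen = deque([(a, 0)]), {a}
    while frontier:
        node, dist = frontier.popleft()
        if node == b:
            return dist
        for n in _neighbors(node):
            if n not in seen:
                seen.add(n)
                frontier.append((n, dist + 1))
    return None  # concepts are disconnected

def edge_counting_similarity(a, b):
    """Simple path-based similarity: 1 / (1 + shortest path length)."""
    d = shortest_path_length(a, b)
    return 0.0 if d is None else 1.0 / (1 + d)
```

Under this toy scheme, "dog" and "wolf" (two edges apart via "canine") score higher than "dog" and "cat" (four edges apart via "carnivore"), which is the sense in which path length acts as the prime factor.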
Integration of search theories and evidential analysis to Web-wide Discovery of information for decision support
The main contribution of this research is that it addresses the issues associated with traditional information gathering and presents a novel semantic approach to Web-based discovery of previously unknown intelligence for effective decision making. It provides a comprehensive theoretical background to the proposed solution, together with a demonstration of the effectiveness of the method from experimental results, showing how the quality of collected information can be significantly enhanced by previously unknown information derived from the available known facts.
The quality of decisions made in business and government relates directly to the quality of the information used to formulate the decision. This information may be retrieved from an organisation's knowledge base (Intranet) or from the World Wide Web. The purpose of this thesis is to investigate the specifics of information gathering from these sources. It studies a number of search techniques that rely on statistical and semantic analysis of unstructured information, and identifies the benefits and limitations of these techniques. It concludes that enterprise search technologies can efficiently manipulate Intranet-held information, but require complex processing of large amounts of textual information, which is neither feasible nor scalable when applied to the Web.
Based upon the search methods investigations, this thesis introduces a new semantic Web-based search method that automates the correlation of topic-related content for discovery of hitherto unknown information from disparate and widely diverse Web-sources. This method is in contrast to traditional search methods that are constrained to specific or narrowly defined topics. It addresses the three key aspects of the information: semantic closeness to search topic, information completeness, and quality. The method is based on algorithms from Natural Language Processing combined with techniques adapted from grounded theory and Dempster-Shafer theory to significantly enhance the discovery of topic related Web-sourced intelligence.
This thesis also describes the development of the new search solution, showing the integration of the mathematical methods used as well as the development of the working model. Real-world experiments demonstrate the effectiveness of the model with supporting performance analysis, showing that the quality of the extracted content is significantly enhanced compared with traditional Web-search approaches.
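The evidential-analysis component rests on Dempster-Shafer theory, whose core operation is Dempster's rule of combination for fusing evidence from independent sources. The sketch below is a generic implementation of that rule, not the thesis's actual code; the "relevant"/"irrelevant" hypotheses and mass values in the example are invented for illustration.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two basic mass assignments.

    Each mass function is a dict mapping a frozenset of hypotheses
    to its belief mass; masses in each dict should sum to 1.
    """
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass lost to contradictory evidence
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are irreconcilable")
    # Renormalise the surviving mass by the non-conflicting proportion.
    return {h: m / (1.0 - conflict) for h, m in combined.items()}
```

Fusing two sources that each lean towards a document being relevant strengthens the combined belief in relevance beyond either source alone, which is how independent Web evidence can reinforce discovered facts.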
The Gamut: A Journal of Ideas and Information, No. 27, Summer 1989
CONTENTS OF ISSUE NO. 27, SUMMER, 1989
Louis T. Milic: Editorial, 2
The New Library of the Past
Diana Orendi Hinze: Expiation and Repression, 4
German literature and the Nazi past
J. Heywood Alexander: Our Redeemed, Beloved Land, 16
Bands, songs, Lincoln! and the Civil War.
Sylvia Whitman: Mountain Nurses, 25
Kentucky's Frontier Nursing Service for mother and child.
Ron Haybron: Fraud in Science, 33
Can we place our trust in the heirs of Galileo and Pasteur?
Pat Martaus: Feminist Literary Criticism, 45
Social reform or academic language game?
J. E. Vacha: Constance and the Con, 51
The unlikely friendship between a Cleveland career girl and Sing Sing's celebrated editor inmate.
Poetry
Ken Waldman: Three Lessons in Taking Off Clothes, 62
The Non Sequitur at the Intersection of Market and Vine, 63
William Virgil Davis: Still Life, 64
Stratagem, 64
Carolyn Reams Smith: Know Old Caleb, 65
Killing Old Caleb For Grandma Mary, 66
B. A. St. Andrews: The Alchemists, 68
Victoria Neufeldt: Catching Up With the Language, 69
Revising Webster's New World Dictionary
Hannah Gilberg and the editors: A Good Bowl-and a Work of Art, 80
A Portfolio of Ceramics by Theresa Yondo
Review
P. K. Saha: The Verbal Workshop, 87
A review of The Wordtree, a thesaurus of ideas
Back Matter
Ken Roby: Rescuing Bentley, 95
Retrieving relevant parts from large environmental-related documents
A large quantity of environment-related information is available. Historically, librarians have provided a facility both for sorting this information into storage and for guiding users to the material relevant to their queries. With the steady increase in the volume, detail and character of this information, existing methods of handling it cannot cope.
This thesis addresses this problem by developing a novel information system framework and applying it to the environmental domain. A brief study was made of information retrieval systems, and an information system framework was developed through the project. It covers the areas of query augmentation and search execution. In particular, the framework considers the issues of using a domain model to help in specifying queries, and of assessing and retrieving sub-parts of large documents.
To test the novel concepts, a case study covering many steps of the information retrieval process was designed and carried out, with supportive results.
Introspective knowledge acquisition for case retrieval networks in textual case base reasoning.
Textual Case Based Reasoning (TCBR) aims at the effective reuse of information contained in unstructured documents. The key advantage of TCBR over traditional Information Retrieval systems is its ability to incorporate domain-specific knowledge to facilitate case comparison beyond simple keyword matching. However, substantial human intervention is needed to acquire and transform this knowledge into a form suitable for a TCBR system. In this research, we present automated approaches that exploit statistical properties of document collections to alleviate this knowledge acquisition bottleneck. We focus on two important knowledge containers: relevance knowledge, which shows the relatedness of features to cases, and similarity knowledge, which captures the relatedness of features to each other. The terminology is derived from the Case Retrieval Network (CRN) retrieval architecture in TCBR, which is used as the underlying formalism in this thesis, applied to text classification.
Concepts generated by Latent Semantic Indexing (LSI) are a useful resource for relevance knowledge acquisition for CRNs. This thesis introduces a supervised LSI technique called sprinkling that exploits class knowledge to bias LSI's concept generation. An extension of this idea, called Adaptive Sprinkling (AS), is proposed to handle inter-class relationships in complex domains such as hierarchical (e.g. the Yahoo directory) and ordinal (e.g. product ranking) classification tasks. Experimental evaluation shows the superiority of CRNs created with sprinkling and AS, not only over LSI on its own but also over state-of-the-art classifiers such as Support Vector Machines (SVM).
Current statistical approaches based on feature co-occurrences can be utilized to mine similarity knowledge for CRNs. However, related words often do not co-occur in the same document, though they co-occur with similar words. We introduce an algorithm to efficiently mine such indirect associations, called higher-order associations. Empirical results show that CRNs created with the acquired similarity knowledge outperform both LSI and SVM.
Incorporating the acquired knowledge into the CRN transforms it into a densely connected network. While improving retrieval effectiveness, this has the unintended effect of slowing down retrieval. We propose a novel retrieval formalism called the Fast Case Retrieval Network (FCRN), which eliminates redundant run-time computations to improve retrieval speed. Experimental results show FCRN's ability to scale up over high-dimensional textual casebases.
Finally, we investigate novel ways of visualizing and estimating the complexity of textual casebases that can help explain performance differences across casebases. Visualization provides qualitative insight into a casebase, while complexity is a quantitative measure that characterizes the classification or retrieval hardness intrinsic to a dataset. We study correlations of experimental results from the proposed approaches against complexity measures over diverse casebases.
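The idea of higher-order associations (words related through shared co-occurrents rather than direct co-occurrence) can be sketched as follows. The toy document collection and the mining procedure below are illustrative only, not the thesis's actual algorithm.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical toy casebase: each document is a bag of feature words.
docs = [
    {"car", "engine", "road"},
    {"automobile", "engine", "wheel"},
    {"car", "wheel", "driver"},
    {"poem", "verse"},
]

def first_order(docs):
    """Direct associations: word pairs that co-occur in some document."""
    pairs = defaultdict(int)
    for d in docs:
        for a, b in combinations(sorted(d), 2):
            pairs[(a, b)] += 1
    return pairs

def higher_order(docs):
    """Second-order associations: word pairs that never co-occur directly
    but share at least one common co-occurring word."""
    direct = first_order(docs)
    neigh = defaultdict(set)
    for a, b in direct:
        neigh[a].add(b)
        neigh[b].add(a)
    vocab = sorted(set().union(*docs))
    indirect = {}
    for a, b in combinations(vocab, 2):
        shared = neigh[a] & neigh[b]
        if (a, b) not in direct and shared:
            indirect[(a, b)] = shared  # the bridging co-occurrents
    return indirect
```

Here "car" and "automobile" never appear in the same toy document, yet both co-occur with "engine" and "wheel", so the pair surfaces as an indirect association, which is exactly the similarity knowledge a direct co-occurrence count would miss.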
Computing point-of-view: modeling and simulating judgments of taste
Thesis (Ph.D.), Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2006. Includes bibliographical references (p. 153-163).
People have rich points-of-view that afford them the ability to judge the aesthetics of people, things, and everyday happenstance; yet viewpoint has an ineffable quality that is hard to articulate in words, let alone capture in computer models. Inspired by cultural theories of taste and identity, this thesis explores end-to-end computational modeling of people's tastes, from model acquisition, to generalization, to application, under various realms. Five aesthetical realms are considered: cultural taste, attitudes, ways of perceiving, taste for food, and sense-of-humor. A person's model is acquired by reading her personal texts, such as a weblog diary, a social network profile, or emails. To generalize a person model, methods such as spreading activation, analogy, and imprimer supplementation are applied to semantic resources and search spaces mined from cultural corpora. Once a generalized model is achieved, a person's tastes are brought to life through perspective-based applications, which afford the exploration of someone else's perspective through interactivity and play. The thesis describes model acquisition systems implemented for each of the five aesthetical realms. The techniques of 'reading for affective themes' (RATE) and 'culture mining' are described, along with their enabling technologies, commonsense reasoning and textual affect analysis. Finally, six perspective-based applications were implemented to illuminate a range of real-world beneficiaries of person modeling: virtual mentoring, self-reflection, and deep customization.
by Xinyu Hugo Liu, Ph.D.
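The generalization step mentioned above relies, among other methods, on spreading activation over a semantic network. A minimal sketch of that general idea follows; the graph, weights, decay rate, and taste labels are invented for illustration and are not taken from the thesis.

```python
def spread_activation(graph, seeds, decay=0.5, iterations=3):
    """Propagate activation from seed concepts through a weighted
    semantic graph (adjacency dict: node -> {neighbor: weight})."""
    activation = dict(seeds)  # e.g. tastes observed in personal texts
    for _ in range(iterations):
        delta = {}
        # Compute incoming activation from the current snapshot.
        for node, level in activation.items():
            for neighbor, weight in graph.get(node, {}).items():
                delta[neighbor] = delta.get(neighbor, 0.0) + level * weight * decay
        # Keep the strongest activation seen for each node.
        for node, incoming in delta.items():
            activation[node] = max(activation.get(node, 0.0), incoming)
    return activation

# Hypothetical taste network: an observed interest activates related concepts.
graph = {"indie rock": {"vinyl": 0.8}, "vinyl": {"coffee shops": 0.5}}
act = spread_activation(graph, {"indie rock": 1.0})
```

Activation weakens with each hop (decay times edge weight), so directly linked concepts end up more strongly predicted than distant ones, which is the generalization behaviour spreading activation is meant to capture.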
The Tiger Vol. 86 Issue 5 1992-09-25
A female-focused design strategy for developing a self-care information system
Ph.D. (Doctor of Philosophy)
Image summarisation: human action description from static images
Master's dissertation, Processamento de Linguagem Natural e Indústrias da Língua, Faculdade de Ciências Humanas e Sociais, Universidade do Algarve, 2014.
The object of this master's thesis is Image Summarisation and, more specifically, the automatic description of human actions from static images. The work was organised into three main phases: data collection, system implementation, and system evaluation. The dataset consists of 1287 images depicting human activities belonging to four semantic categories: "walking a dog", "riding a bike", "riding a horse" and "playing the guitar". The images were manually annotated with an approach based on the idea of crowd-sourcing, and the annotation of each image takes the form of one or two simple sentences.
The system is composed of two parts: a Content-based Image Retrieval part and a Natural Language Processing part. Given a query image, the first part retrieves a set of images perceived as visually similar, and the second part processes the annotations accompanying each of those images in order to extract common information, using a graph-merging technique over the dependency graphs of the annotated sentences. An optimal path consisting of a subject-verb-complement relation is extracted and transformed into a proper sentence by applying a set of surface processing rules.
The evaluation of the system was carried out in three different ways. First, the Content-based Image Retrieval sub-system was evaluated in terms of precision and recall and compared to a baseline classification system based on randomness. To evaluate the Natural Language Processing sub-system, the Image Summarisation task was treated as a machine translation task and was therefore evaluated in terms of BLEU score: given images corresponding to the same semantic category as a query image, the system output was compared to the corresponding reference summary provided during the annotation phase. Finally, the whole system was qualitatively evaluated by means of a questionnaire.
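Scoring generated descriptions against reference summaries with BLEU can be sketched as follows. This is a simplified sentence-level variant (clipped n-gram precision up to bigrams, geometric mean, brevity penalty), not the exact scorer used in the dissertation, and the example sentences are invented.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=2):
    """Sentence-level BLEU sketch: clipped n-gram precision up to
    max_n, combined by geometric mean, times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    log_precision = 0.0
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each candidate n-gram count at its count in the reference.
        clipped = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(1, sum(cand_counts.values()))
        if clipped == 0:
            return 0.0  # no overlap at this order
        log_precision += math.log(clipped / total) / max_n
    # Penalise candidates shorter than the reference.
    brevity = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(1, len(cand)))
    return brevity * math.exp(log_precision)
```

A candidate identical to its reference scores 1.0, while a description sharing no bigrams with the reference scores 0, which is why BLEU serves as a rough proxy for how closely a generated summary matches the human annotation.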
The conclusion reached by the evaluation is that, even if the system does not always capture the right human action and the subjects and objects involved in it, it produces understandable and linguistically adequate summaries.
Erasmus Mundus