Off the Beaten Path: Let's Replace Term-Based Retrieval with k-NN Search
Retrieval pipelines commonly rely on a term-based search to obtain candidate
records, which are subsequently re-ranked. Some candidates are missed by this
approach, e.g., due to a vocabulary mismatch. We address this issue by
replacing the term-based search with a generic k-NN retrieval algorithm, where
a similarity function can take into account subtle term associations. While an
exact brute-force k-NN search using this similarity function is slow, we
demonstrate that an approximate algorithm can be nearly two orders of magnitude
faster at the expense of only a small loss in accuracy. A retrieval pipeline
using an approximate k-NN search can be more effective and efficient than the
term-based pipeline. This opens up new possibilities for designing effective
retrieval pipelines. Our software (including data-generating code) and
derivative data based on the Stack Overflow collection are available online.
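A minimal sketch of the contrast this abstract draws, assuming dense
random vectors as stand-in document representations and the hnswlib HNSW
index for the approximate search; the paper's actual similarity function
and pipeline are not reproduced here.

```python
# Exact brute-force k-NN vs. approximate HNSW search (illustrative).
import numpy as np
import hnswlib

dim, n_docs, k = 128, 50_000, 10
rng = np.random.default_rng(0)
docs = rng.standard_normal((n_docs, dim)).astype(np.float32)
query = rng.standard_normal(dim).astype(np.float32)

# Exact search: score every document against the query (slow at scale).
exact = np.argsort(-(docs @ query))[:k]

# Approximate search: an HNSW graph index trades a little recall for speed.
index = hnswlib.Index(space="ip", dim=dim)   # inner-product similarity
index.init_index(max_elements=n_docs, ef_construction=200, M=16)
index.add_items(docs)
index.set_ef(50)                             # higher ef -> better recall
approx, _ = index.knn_query(query, k=k)

print("overlap@10 with exact search:", len(set(exact) & set(approx[0])))
```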
Learning a Deep Listwise Context Model for Ranking Refinement
Learning to rank has been intensively studied and widely applied in
information retrieval. Typically, a global ranking function is learned from a
set of labeled data, which can achieve good performance on average but may be
suboptimal for individual queries by ignoring the fact that relevant documents
for different queries may have different distributions in the feature space.
Inspired by the idea of pseudo relevance feedback where top ranked documents,
which we refer to as the "local ranking context", can provide important
information about the query's characteristics, we propose to use the inherent
feature distributions of the top results to learn a Deep Listwise Context Model
that helps us fine-tune the initial ranked list. Specifically, we employ a
recurrent neural network to sequentially encode the top results using their
feature vectors, learn a local context model and use it to re-rank the top
results. Our model has three merits: (1) it can capture the local
ranking context based on the complex interactions between top results
using a deep neural network; (2) it can be built upon existing
learning-to-rank methods by directly using their extracted feature
vectors; (3) it is trained with an attention-based loss function, which is more
effective and efficient than many existing listwise methods. Experimental
results show that the proposed model can significantly improve the
state-of-the-art learning-to-rank methods on benchmark retrieval corpora.
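A hedged sketch of the core idea: a recurrent network encodes the
feature vectors of the top-k results and emits refined scores. The GRU,
layer sizes, and the softmax cross-entropy listwise loss below are
illustrative assumptions, not the paper's exact architecture or loss.

```python
import torch
import torch.nn as nn

class ListwiseContextReRanker(nn.Module):
    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, k, feat_dim) feature vectors of the top-k
        # results produced by an existing learning-to-rank model.
        ctx, _ = self.rnn(feats)            # encode the local ranking context
        return self.score(ctx).squeeze(-1)  # one refined score per result

model = ListwiseContextReRanker(feat_dim=46)
feats = torch.randn(2, 10, 46)                 # 2 queries, top-10 lists
labels = torch.randint(0, 2, (2, 10)).float()  # binary relevance

# Listwise loss: cross-entropy between the score softmax and the
# normalized relevance distribution (an assumption, not the paper's
# exact attention-based formulation).
log_p = torch.log_softmax(model(feats), dim=-1)
target = labels / labels.sum(dim=-1, keepdim=True).clamp(min=1.0)
loss = -(target * log_p).sum(dim=-1).mean()
loss.backward()
```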
Simulated evaluation of faceted browsing based on feature selection
In this paper we explore the limitations of facet-based browsing, which uses sub-needs of an information need for querying and organising the search process in video retrieval. The underlying assumption of this approach is that search effectiveness will be enhanced if such an approach is employed for interactive video retrieval using textual and visual features. We explore the performance bounds of a faceted system by carrying out a simulated user evaluation on TRECVid data sets, and also on the logs of a prior user experiment with the system. We first present a methodology to reduce the dimensionality of features by selecting the most important ones. Then, we discuss the simulated evaluation strategies employed in our evaluation and the effect of using both textual and visual features. Facets created by users are simulated by clustering video shots using textual and visual features. The experimental results of our study demonstrate that the faceted browser can potentially improve search effectiveness.
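A small sketch of the simulation step described above, assuming k-means
clustering of shot features stands in for user-created facets; the
feature dimensions and cluster count are placeholders, not the study's
settings.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
text_feats = rng.random((500, 50))    # e.g. ASR-transcript term weights
visual_feats = rng.random((500, 30))  # e.g. colour/texture descriptors
shots = np.hstack([text_feats, visual_feats])

# Each cluster stands in for one facet (sub-need) of the information need.
facets = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(shots)
print("shots per simulated facet:", np.bincount(facets))
```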
Enhancing the Performance of Text Mining
The amount of text data produced in science, finance, social media, and medicine is growing at an unprecedented pace. Raw text data typically introduces major computational and analytical obstacles (e.g., extremely high dimensionality) for data mining and machine learning algorithms. Besides, the growth in the size of text data makes the search process more difficult for information retrieval systems, making it challenging to retrieve relevant results that match users’ search queries. Moreover, the availability of text data in different languages creates the need to develop new methods for analyzing multilingual topics, to help policymakers in governmental and health systems make risk decisions and create policies that respond to public health crises, natural disasters, and political or social movements. The goal of this thesis is to develop new methods that handle computational and analytical problems for complex high-dimensional text data, to develop a new query expansion approach that enhances the performance of information retrieval systems, and to present new techniques for analyzing multilingual topics using a translation service.
First, in the field of dimensionality reduction, we develop a new method for detecting and eliminating domain-based words. In this method, we use three different datasets and five classifiers for testing and evaluating the performance of our new approach before and after eliminating domain-based words. We compare the performance of our approach with other feature selection methods. We find that the new approach improves the performance of the binary classifier and reduces the dimensionality of the feature space by 90%. Also, our approach reduces the execution time of the classifier and outperforms one of the feature selection methods.
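The abstract does not spell out how domain-based words are detected, so
the sketch below shows one hedged stand-in: dropping terms whose document
frequency exceeds a threshold via scikit-learn's max_df. The toy medical
corpus is illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "patient reports mild fever and cough",
    "patient discharged after routine checkup",
    "patient scheduled for follow-up scan",
]
# Terms appearing in more than 80% of documents (here, "patient") are
# treated as domain-based and removed from the feature space.
vec = TfidfVectorizer(max_df=0.8)
X = vec.fit_transform(corpus)
print(sorted(vec.vocabulary_))       # "patient" is filtered out
print("kept features:", X.shape[1])
```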
Second, in the field of information retrieval, we design and implement a method that integrates words from a current stream with external data sources in order to predict the occurrence of relevant words that have not yet appeared in the primary source. This algorithm enables the construction of new queries that effectively capture emergent events that a user may not have anticipated when initiating the data collection stream. The added value of the external data sources lies in prediction: rather than relying only on the stream, which is limited to the information available at a given time, the method anticipates words that have not yet appeared. We compare the performance of our approach with two alternative approaches. The first approach (static) expands user queries with words extracted from a probabilistic topic model of the stream. The second approach (emergent) reinforces user queries with emergent words extracted from the stream. We find that our method outperforms both alternatives, exhibiting particularly good results in identifying future emergent topics.
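A simplified, hedged illustration of the expansion idea: candidate words
from an external source are ranked by co-occurrence with the current
stream's terms, so a query can anticipate words that have not yet
appeared in the stream. The scoring below is an assumption, not the
thesis's algorithm.

```python
from collections import Counter

stream_terms = {"flood", "evacuation"}
external_docs = [
    "flood warning issued as river levels rise",
    "evacuation ordered before the storm surge hits",
    "storm surge expected along the coast tonight",
]

scores = Counter()
for doc in external_docs:
    words = set(doc.split())
    if words & stream_terms:            # document touches the stream topic
        for w in words - stream_terms:
            scores[w] += 1              # candidate emergent term

expanded_query = stream_terms | {w for w, _ in scores.most_common(3)}
print(expanded_query)                   # e.g. adds "storm", "surge"
```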
Third, in the field of multilingual text analysis, we present a strategy to analyze the similarity between multilingual topics in English and Arabic tweets surrounding the 2020 COVID-19 pandemic. We make a descriptive comparison between topics in Arabic and English tweets about COVID-19, using tweets collected in the same way and filtered with the same keywords. We analyze the Twitter discussion to understand how topics evolve over time and to reveal topic similarity among tweets across the datasets. We use probabilistic topic modeling to identify and extract the key topics of the discussion in Arabic and English tweets. We use two methods to analyze the similarity between multilingual topics. The first method (the full-text topic modeling approach) translates all text to English and then runs topic modeling to find similar topics. The second method (the term-based topic modeling approach) runs topic modeling on the untranslated text and then translates the top keywords in each topic to find similar topics. We find similar topics related to the COVID-19 pandemic covered in English and Arabic tweets for certain time intervals. Results indicate that the term-based topic modeling approach can reduce the cost compared to the full-text approach while achieving comparable results in finding similar topics: the computational time to translate the terms is significantly lower than that of translating the full text.
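A toy sketch of the term-based approach: topics are modeled per language
and only each topic's top keywords are translated before comparison. The
keyword sets (shown already translated) and the Jaccard similarity below
are illustrative assumptions; the thesis used a real translation service
and its own similarity analysis.

```python
# Top keywords of one Arabic topic after keyword-level translation,
# and one English topic (both hypothetical).
arabic_topic_translated = {"vaccine", "pandemic", "quarantine", "mask"}
english_topic = {"vaccine", "lockdown", "mask", "testing"}

# Jaccard overlap between the translated keyword sets as a cheap
# cross-lingual topic-similarity signal.
overlap = len(arabic_topic_translated & english_topic)
union = len(arabic_topic_translated | english_topic)
print(f"topic similarity: {overlap / union:.2f}")   # 0.33
```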
An Incremental Construction of Deep Neuro Fuzzy System for Continual Learning of Non-stationary Data Streams
Existing fuzzy neural networks (FNNs) are mostly developed under a shallow
network configuration, which has lower generalization power than deep
structures. This paper
proposes a novel self-organizing deep FNN, namely DEVFNN. Fuzzy rules can be
automatically extracted from data streams or removed if they play a limited
role
during their lifespan. The structure of the network can be deepened on demand
by stacking additional layers using a drift detection method which not only
detects covariate drift (variations of the input space) but also accurately
identifies real drift (dynamic changes of both the feature space and the
target space). DEVFNN is developed under the stacked generalization principle via the
feature augmentation concept where a recently developed algorithm, namely
gClass, drives the hidden layer. It is equipped with an automatic feature
selection method that controls the activation and deactivation of input attributes
to induce varying subsets of input features. A deep network simplification
procedure is put forward using the concept of hidden layer merging to prevent
uncontrollable growth in the dimensionality of the input space caused by the
feature augmentation approach used to build a deep network structure. DEVFNN
works in a sample-wise fashion and is compatible with data stream
applications. The efficacy of DEVFNN has been thoroughly evaluated using seven
datasets with non-stationary properties under the prequential test-then-train
protocol. It has been compared with four popular continual learning algorithms
and its shallow counterpart, where DEVFNN demonstrates improved
classification accuracy. Moreover, it is also shown that the concept drift
detection method is an effective tool to control the depth of network structure
while the hidden layer merging scenario is capable of simplifying the network
complexity of a deep network with negligible compromise of generalization
performance.
Comment: This paper has been published in IEEE Transactions on Fuzzy Systems.
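A minimal sketch of the prequential test-then-train protocol mentioned
above: each arriving sample is first used for testing, then for
training. A linear online classifier stands in for DEVFNN, whose
implementation is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5))
y = (X[:, 0] + 0.1 * rng.standard_normal(1000) > 0).astype(int)

model = SGDClassifier(loss="log_loss")
model.partial_fit(X[:1], y[:1], classes=np.array([0, 1]))  # warm start

correct = 0
for xi, yi in zip(X[1:], y[1:]):
    correct += int(model.predict(xi.reshape(1, -1))[0] == yi)  # test first
    model.partial_fit(xi.reshape(1, -1), [yi])                 # then train
print("prequential accuracy:", correct / (len(X) - 1))
```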
Cross-view Embeddings for Information Retrieval
In this dissertation, we deal with the cross-view tasks related to information retrieval
using embedding methods. We study existing methodologies and propose new methods to overcome their limitations. We formally introduce the concept of mixed-script
IR, which deals with the challenges faced by an IR system when a language is written
in different scripts because of various technological and sociological factors. Mixed-script terms are represented by a small and finite feature space comprised of character
n-grams. We propose the cross-view autoencoder (CAE) to model such terms in an
abstract space; the CAE achieves state-of-the-art performance.
We study a wide variety of models for cross-language information retrieval (CLIR)
and propose a model based on compositional neural networks (XCNN) which overcomes the limitations of the existing methods and achieves the best results for many
CLIR tasks such as ad-hoc retrieval, parallel sentence retrieval and cross-language
plagiarism detection. We empirically test the proposed models for these tasks on
publicly available datasets and present the results with analyses.
In this dissertation, we also explore an effective method to incorporate contextual
similarity for lexical selection in machine translation. Concretely, we investigate a
feature based on the context available in the source sentence, calculated using deep autoencoders. The proposed feature exhibits statistically significant improvements over the
strong baselines for English-to-Spanish and English-to-Hindi translation tasks.
Finally, we explore methods to evaluate the quality of autoencoder-generated representations of text data and analyse the autoencoders' architectural properties. For this,
we propose two metrics based on reconstruction capabilities of the autoencoders:
structure preservation index (SPI) and similarity accumulation index (SAI). We also
introduce a concept of critical bottleneck dimensionality (CBD) below which the
structural information is lost, and present analyses linking CBD and
language perplexity.
Gupta, P.A. (2017). Cross-view Embeddings for Information Retrieval [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/78457
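A small sketch of the character n-gram representation the dissertation
uses as input for modelling mixed-script terms; the bigram features and
the toy Roman/Devanagari variants are illustrative, and the cross-view
autoencoder itself is not reproduced.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Mixed-script variants of one term (Roman and Devanagari spellings).
terms = ["dil", "dill", "दिल"]
vec = CountVectorizer(analyzer="char", ngram_range=(1, 2))
X = vec.fit_transform(terms)

# Each term becomes a small, finite character n-gram vector -- the
# feature space a cross-view autoencoder would map into a shared
# abstract space.
print(vec.get_feature_names_out())
print(X.toarray())
```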
From Frequency to Meaning: Vector Space Models of Semantics
Computers understand very little of the meaning of human language. This
profoundly limits our ability to give instructions to computers, the ability of
computers to explain their actions to us, and the ability of computers to
analyse and process text. Vector space models (VSMs) of semantics are beginning
to address these limits. This paper surveys the use of VSMs for semantic
processing of text. We organize the literature on VSMs according to the
structure of the matrix in a VSM. There are currently three broad classes of
VSMs, based on term-document, word-context, and pair-pattern matrices, yielding
three classes of applications. We survey a broad range of applications in these
three categories and we take a detailed look at a specific open source project
in each category. Our goal in this survey is to show the breadth of
applications of VSMs for semantics, to provide a new perspective on VSMs for
those who are already familiar with the area, and to provide pointers into the
literature for those who are less familiar with the field.
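A minimal sketch of the first matrix class the survey covers: a
document-term count matrix (the transpose of the survey's term-document
matrix), with document similarity computed as the cosine between
document vectors. The toy corpus is illustrative.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "vector space models of semantics",
]
X = CountVectorizer().fit_transform(docs)  # rows: documents, cols: terms
print(cosine_similarity(X).round(2))       # document-document similarities
```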