
    HD-Index: Pushing the Scalability-Accuracy Boundary for Approximate kNN Search in High-Dimensional Spaces

    Nearest-neighbor search over large databases in high-dimensional spaces is inherently difficult due to the curse of dimensionality. Some flavor of approximation is therefore necessary to solve the nearest-neighbor search problem in practice. In this paper, we propose a novel yet simple indexing scheme, HD-Index, to answer approximate k-nearest-neighbor queries over massive high-dimensional databases. HD-Index consists of a set of novel hierarchical structures called RDB-trees, built on the Hilbert keys of database objects. The leaves of the RDB-trees store distances of database objects to reference objects, allowing efficient pruning using distance filters. In addition to the triangle inequality, we also use the Ptolemaic inequality to produce better lower bounds. Experiments on massive (up to billion-scale) high-dimensional (up to 1000+ dimensions) datasets show that HD-Index is effective, efficient, and scalable.
    Comment: PVLDB 11(8):906-919, 2018
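
    To make the pruning idea concrete, here is a minimal Python sketch comparing the two lower bounds the abstract mentions. It assumes a Euclidean metric and randomly chosen pivots, and does not reproduce the RDB-tree index itself; during search, a candidate can be discarded whenever its lower bound exceeds the distance of the current k-th best neighbor.

        import numpy as np

        def triangle_lb(d_qp, d_op):
            # Triangle inequality via one pivot p: d(q, o) >= |d(q, p) - d(o, p)|
            return abs(d_qp - d_op)

        def ptolemaic_lb(d_qp1, d_qp2, d_op1, d_op2, d_p1p2):
            # Ptolemaic inequality via two pivots p1, p2 (valid in Euclidean space):
            # d(q, o) >= |d(q, p1) * d(o, p2) - d(q, p2) * d(o, p1)| / d(p1, p2)
            return abs(d_qp1 * d_op2 - d_qp2 * d_op1) / d_p1p2

        # Toy demonstration with random 2-D points (illustrative only).
        rng = np.random.default_rng(0)
        q, o, p1, p2 = rng.standard_normal((4, 2))
        d = lambda a, b: float(np.linalg.norm(a - b))

        true_dist = d(q, o)
        lb_tri = max(triangle_lb(d(q, p1), d(o, p1)),
                     triangle_lb(d(q, p2), d(o, p2)))
        lb_pto = ptolemaic_lb(d(q, p1), d(q, p2), d(o, p1), d(o, p2), d(p1, p2))

        # Both are valid lower bounds; when the Ptolemaic bound is tighter,
        # more candidates can be pruned without computing d(q, o).
        assert lb_tri <= true_dist + 1e-9 and lb_pto <= true_dist + 1e-9
        print(f"true={true_dist:.3f}  triangle={lb_tri:.3f}  ptolemaic={lb_pto:.3f}")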

    Multimedia Answering and Retrieval System based on CQA with Media Query Generation

    Question answering systems (QAS) have recently received attention from research in information retrieval, machine learning, information extraction, and natural language processing. The goal of a QAS is to return a direct answer to a question rather than full documents. A QAS consists of three modules: question processing, document processing, and answer processing. When the question-processing module fails to categorize questions correctly, the whole system performs poorly. Community question answering (CQA) has gained popularity to overcome these limitations, and CQA sites are more effective than automated QA sites. A remaining drawback of CQA, however, is that it provides only textual answers. In this paper, we propose a scheme that enriches textual answers with multimedia data. The approach consists of three components: selection of the answer medium, query generation for multimedia search, and selection and presentation of multimedia data. It automatically determines which type of media information should be added to a textual answer, and then automatically collects that data from the web to supplement the answer. By processing an existing dataset of QA pairs and adding them to a pool, this multimedia question answering (MMQA) approach lets users find multimedia answers by matching their questions against those in the pool. Users can therefore obtain answers drawn from web information in different media formats (text, video, and image), as selected by the users.
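
    As a purely illustrative Python sketch of the first two components, consider the toy pipeline below. The cue-word lists, stopword set, and heuristics are hypothetical stand-ins for illustration only; the actual system would learn these decisions from QA-pair features rather than fixed keyword rules.

        import re

        # Hypothetical cue words suggesting which medium best enriches an answer.
        VIDEO_CUES = ("how to", "how do", "steps", "procedure")
        IMAGE_CUES = ("what does", "look like", "appearance", "diagram")

        def select_answer_medium(question: str) -> str:
            # Component 1: decide which medium should supplement the textual answer.
            q = question.lower()
            if any(cue in q for cue in VIDEO_CUES):
                return "video"
            if any(cue in q for cue in IMAGE_CUES):
                return "image"
            return "text"

        STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "in", "how",
                     "what", "does", "i", "it", "and"}

        def generate_media_query(question: str, answer: str, max_terms: int = 6) -> str:
            # Component 2: distill the QA pair into a short multimedia search query.
            terms = re.findall(r"[a-z0-9]+", (question + " " + answer).lower())
            seen, query = set(), []
            for t in terms:
                if t not in STOPWORDS and t not in seen:
                    seen.add(t)
                    query.append(t)
            return " ".join(query[:max_terms])

        question = "How do I repot an orchid?"
        answer = "Loosen the roots, trim dead ones, and place it in fresh bark mix."
        print(select_answer_medium(question), "->", generate_media_query(question, answer))
        # Component 3 would submit the query to an image/video search service
        # and rank the returned media for presentation alongside the answer.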

    N-gram Based Text Categorization Method for Improved Data Mining

    Although naïve Bayes text classifiers are widely used for their simplicity and effectiveness, techniques for improving their performance have rarely been studied. Naïve Bayes classifiers, widely used for text classification in machine learning, are based on the conditional probability of features belonging to a class, where the features are chosen by feature selection methods. Their performance is often imperfect, however, because the model does not represent text well, and because of inappropriate feature selection and inherent weaknesses of naïve Bayes itself. Text classification, also called text categorization (or sentiment classification when the labels express opinion), is the task of taking a set of labeled text documents, learning a correlation between each document's contents and its label, and then predicting the labels of unlabeled test documents as accurately as possible. Text classification has many applications in natural language processing, such as e-mail filtering, intrusion detection systems, news filtering, prediction of user preferences, and organization of documents. The naïve Bayes model makes strong assumptions about the data: it assumes that the words in a document are independent. This assumption is clearly violated in natural language text, where the syntactic, semantic, pragmatic, and conversational structure of a text induces many kinds of dependences between words. The particular form of the probabilistic model also makes assumptions about the distribution of words in documents that are violated in practice. We address this problem and show that it can be solved by modeling text data differently, using N-grams: a simple method based on statistical information about the usage of sequences of words. We conducted an experiment demonstrating that this simple modification significantly improves the performance of naïve Bayes for text classification. Keywords: Data Mining, Text Classification, Text Categorization, Naïve Bayes, N-Grams
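
    A minimal sketch of the idea using scikit-learn (the toy corpus and test sentence are invented for illustration): adding bigram features lets multinomial naïve Bayes treat short word sequences such as "not good" as single features, partially capturing the word-order dependences the abstract describes.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Tiny illustrative corpus; a real experiment would use a labeled dataset.
        train_docs = [
            "the movie was not good at all",
            "a truly great and moving film",
            "the plot was dull and the acting poor",
            "wonderful performances and a great story",
        ]
        train_labels = ["neg", "pos", "neg", "pos"]

        # Unigram baseline: "not good" decomposes into independent "not" and "good".
        unigram_clf = make_pipeline(CountVectorizer(ngram_range=(1, 1)), MultinomialNB())

        # Unigram + bigram features: "not good" also appears as one feature.
        bigram_clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())

        for name, clf in [("unigrams", unigram_clf), ("uni+bigrams", bigram_clf)]:
            clf.fit(train_docs, train_labels)
            print(name, clf.predict(["the film was not good"]))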

    Acoustic Word Embeddings for Untranscribed Target Languages with Continued Pretraining and Learned Pooling

    Acoustic word embeddings are typically created by training a pooling function using pairs of word-like units. For unsupervised systems, these pairs are mined using k-nearest-neighbor (KNN) search, which is slow. Recently, mean-pooled representations from a pre-trained self-supervised English model were suggested as a promising alternative, but their performance on target languages was not fully competitive. Here, we explore improvements to both approaches: we use continued pre-training to adapt the self-supervised model to the target language, and we use a multilingual phone recognizer (MPR) to mine phone n-gram pairs for training the pooling function. Evaluating on four languages, we show that both methods outperform a recent approach on word discrimination. Moreover, the MPR method is orders of magnitude faster than KNN and is highly data efficient. We also show a small improvement from performing learned pooling on top of the continued pre-trained representations.
    Comment: Accepted to Interspeech 2023
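
    As a rough Python (PyTorch) sketch of the difference between mean pooling and a learned pooling function: the attention module, triplet-style loss, margin, encoder width, and random "segments" below are all illustrative assumptions, not the paper's exact setup.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        def mean_pool(frames: torch.Tensor) -> torch.Tensor:
            # Baseline: average frame-level features over time -> one fixed-size embedding.
            return frames.mean(dim=0)

        class AttentionPooling(nn.Module):
            # Generic learned pooling: one scalar attention weight per frame.
            # A sketch only, not the architecture from the paper.
            def __init__(self, dim: int):
                super().__init__()
                self.score = nn.Linear(dim, 1)

            def forward(self, frames: torch.Tensor) -> torch.Tensor:  # (T, D) -> (D,)
                weights = F.softmax(self.score(frames), dim=0)        # (T, 1)
                return (weights * frames).sum(dim=0)

        # Frame features would come from a (continued pre-trained) self-supervised
        # speech encoder; random tensors stand in for them here.
        dim = 768                       # assumed encoder width
        seg_a = torch.randn(42, dim)    # two segments mined as the same
        seg_b = torch.randn(57, dim)    # word-like unit (positive pair)
        neg = torch.randn(35, dim)      # a segment of a different unit (negative)

        print(mean_pool(seg_a).shape)   # baseline embedding: torch.Size([768])

        pool = AttentionPooling(dim)
        opt = torch.optim.Adam(pool.parameters(), lr=1e-4)

        # One contrastive step: pull the positive pair together, push the
        # negative away by a margin in cosine space.
        emb_a, emb_b, emb_n = pool(seg_a), pool(seg_b), pool(neg)
        loss = (1 - F.cosine_similarity(emb_a, emb_b, dim=0)) \
               + F.relu(F.cosine_similarity(emb_a, emb_n, dim=0) - 0.3)
        opt.zero_grad()
        loss.backward()
        opt.step()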