Improving Term Frequency Normalization for Multi-topical Documents, and Application to Language Modeling Approaches
Term frequency normalization is an important issue because document lengths vary. Generally, documents become long for two different reasons: verbosity and multi-topicality. First, verbosity means that the same topic is repeatedly mentioned using terms related to that topic, so term frequencies are higher than in a well-summarized document. Second, multi-topicality indicates that a document broadly discusses multiple topics rather than a single one. Although these document characteristics should be handled differently, all previous term frequency normalization methods have ignored the distinction and used a simplified length-driven approach that discounts term frequency by document length alone, causing unreasonable penalization. To address this problem, we propose a novel TF normalization method based on a partially axiomatic approach. We first formulate two formal constraints that a retrieval model should satisfy for verbose and multi-topical documents, respectively. Then, we modify language modeling approaches to better satisfy these two constraints and derive novel smoothing methods. Experimental results show that the proposed method significantly increases precision for keyword queries and substantially improves MAP (Mean Average Precision) for verbose queries.
Comment: 8 pages, conference paper, published in ECIR '0
Estimating Conditional Mutual Information for Dynamic Feature Selection
Dynamic feature selection, where we sequentially query features to make
accurate predictions with a minimal budget, is a promising paradigm to reduce
feature acquisition costs and provide transparency into the prediction process.
The problem is challenging, however, as it requires both making predictions
with arbitrary feature sets and learning a policy to identify the most valuable
selections. Here, we take an information-theoretic perspective and prioritize
features based on their mutual information with the response variable. The main
challenge is learning this selection policy, and we design a straightforward
new modeling approach that estimates the mutual information in a discriminative
rather than generative fashion. Building on our learning approach, we introduce
several further improvements: allowing variable feature budgets across samples,
enabling non-uniform costs between features, incorporating prior information,
and exploring modern architectures to handle partial input information. We find
that our method provides consistent gains over recent state-of-the-art methods
across a variety of datasets.
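As a hedged sketch of the general idea rather than the paper's exact discriminative estimator, the following shows a greedy dynamic-selection loop in which a hypothetical value network scores unobserved features by estimated conditional mutual information and the highest-scoring feature is queried until a fixed budget is spent; CMIValueNet, greedy_select, and the network architecture are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class CMIValueNet(nn.Module):
    """Hypothetical value network: given the masked input and the mask, it outputs
    one score per feature, intended as a discriminative surrogate for the
    conditional mutual information between that feature and the label."""
    def __init__(self, n_features, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_features),
        )

    def forward(self, x_masked, mask):
        return self.net(torch.cat([x_masked, mask], dim=-1))

def greedy_select(x, value_net, budget):
    """Greedily query the feature with the highest estimated CMI at each step."""
    mask = torch.zeros_like(x)
    for _ in range(budget):
        scores = value_net(x * mask, mask)
        scores = scores.masked_fill(mask.bool(), float("-inf"))  # never re-select
        idx = scores.argmax(dim=-1, keepdim=True)
        mask = mask.scatter(-1, idx, 1.0)
    return mask  # indicator of which features were acquired

# usage with random data: a batch of 4 samples, 20 features, budget of 5 queries
x = torch.randn(4, 20)
acquired = greedy_select(x, CMIValueNet(20), budget=5)
```

Extensions mentioned in the abstract, such as per-sample budgets or non-uniform feature costs, would replace the fixed loop count and the plain argmax with cost-aware stopping and scoring rules.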
Feature Selection in the Contrastive Analysis Setting
Contrastive analysis (CA) refers to the exploration of variations uniquely
enriched in a target dataset as compared to a corresponding background dataset
generated from sources of variation that are irrelevant to a given task. For
example, a biomedical data analyst may wish to find a small set of genes to use
as a proxy for variations in genomic data only present among patients with a
given disease (target) as opposed to healthy control subjects (background).
However, the problem of feature selection in the CA setting has thus far
received little attention from the machine learning community. In this work we
present contrastive feature selection (CFS), a method for performing feature
selection in the CA setting. We motivate our approach with a novel
information-theoretic analysis of representation learning in the CA setting,
and we empirically validate CFS on a semi-synthetic dataset and four real-world
biomedical datasets. We find that our method consistently outperforms
previously proposed state-of-the-art supervised and fully unsupervised feature
selection methods not designed for the CA setting. An open-source
implementation of our method is available at https://github.com/suinleelab/CFS.
Comment: NeurIPS 202
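For intuition only, here is a naive contrastive criterion under assumed names: rank features by how much more variance they show in the target data than in the background. This is not the CFS method, which is information-theoretic and learned, but it illustrates the CA goal of isolating target-specific variation.

```python
import numpy as np

def contrastive_variance_scores(target, background, eps=1e-8):
    """Naive contrastive score: per-feature variance in the target data divided
    by variance in the background data. High scores mark features whose
    variation is enriched in the target dataset."""
    return target.var(axis=0) / (background.var(axis=0) + eps)

# usage: synthetic data where the first 5 features carry extra target-only variation
rng = np.random.default_rng(0)
background = rng.normal(size=(200, 50))
target = rng.normal(size=(200, 50))
target[:, :5] += rng.normal(scale=3.0, size=(200, 5))
top_k = np.argsort(-contrastive_variance_scores(target, background))[:5]
print(sorted(top_k))  # should largely recover features 0-4
```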