DarkRank: Accelerating Deep Metric Learning via Cross Sample Similarities Transfer
We have witnessed a rapid evolution of deep neural network architecture design
in recent years. This progress has greatly facilitated developments in areas
such as computer vision and natural language processing. However, along with
their extraordinary performance, these state-of-the-art models also incur a
high computational cost, and directly deploying them in applications with
real-time requirements remains infeasible. Recently, Hinton et al. showed that
the dark knowledge within a powerful teacher model can significantly help the
training of a smaller and faster student network, and that this knowledge
greatly improves the generalization ability of the student model. Inspired by
their work, we introduce a new type of knowledge -- cross-sample similarities
-- for model compression and acceleration. This knowledge can be naturally
derived from a deep metric learning model. To transfer it, we bring the
"learning to rank" technique into the deep metric learning formulation. We test
our proposed DarkRank method on various metric learning tasks, including
pedestrian re-identification, image retrieval, and image clustering. The
results are encouraging: our method improves over the baseline by a large
margin. Moreover, it is fully compatible with other existing methods; when
combined, performance can be boosted further.
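The core idea above can be illustrated with a minimal sketch. This is not the paper's exact formulation: the function names are hypothetical, and we assume the cross-sample similarities are transferred by asking the student to reproduce the teacher's similarity ranking around an anchor, scored with a listwise (Plackett-Luce) likelihood as in "learning to rank".

```python
import numpy as np

def plackett_luce_nll(scores):
    """Negative log-likelihood, under the Plackett-Luce model, of the
    ranking that lists the items in the order they are given."""
    scores = np.asarray(scores, dtype=float)
    nll = 0.0
    for i in range(len(scores)):
        nll -= scores[i] - np.log(np.sum(np.exp(scores[i:])))
    return float(nll)

def cross_sample_similarity_loss(teacher_emb, student_emb, anchor_idx=0):
    """Hypothetical sketch of cross-sample similarity transfer: the student
    is pushed to reproduce the teacher's ranking of all other samples by
    similarity to an anchor sample."""
    def sims(emb):
        emb = np.asarray(emb, dtype=float)
        anchor = emb[anchor_idx]
        others = np.delete(emb, anchor_idx, axis=0)
        # negative Euclidean distance as a similarity score
        return -np.linalg.norm(others - anchor, axis=1)

    t, s = sims(teacher_emb), sims(student_emb)
    # NLL of the teacher's ranking under the student's similarity scores
    return plackett_luce_nll(s[np.argsort(-t)])
```

A student whose embedding geometry matches the teacher's incurs a lower loss than one that reverses the teacher's neighbor ordering, which is the property a distillation objective of this kind needs.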
Diversification Based Static Index Pruning - Application to Temporal Collections
Nowadays, web archives preserve the history of large portions of the web. As
media shift from printed to digital editions, accessing these huge information
sources is drawing increasing attention from national and international
institutions, as well as from the research community. These collections are
intrinsically large, leading to index files that do not fit into memory and to
increased query response times. Decreasing the index size is a direct way to
reduce this response time.
Static index pruning methods reduce the size of indexes by removing a portion
of the postings. In the context of web archives, postings must be removed
while preserving the temporal diversity of the archive, yet none of the
existing pruning approaches take (temporal) diversification into account.
In this paper, we propose a diversification-based static index pruning method.
It differs from existing pruning approaches by integrating diversification
into the pruning process: we prune the index while preserving retrieval
effectiveness and diversity, pruning so as to maximize a given IR evaluation
metric such as DCG. We show how to apply this approach in the context of web
archives. Finally, we show on two collections that search effectiveness in
temporal collections after pruning is better with our approach than with
diversity-oblivious approaches.
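To make the idea concrete, here is a minimal sketch of temporally diversified static pruning. It is not the paper's algorithm (which maximizes an IR metric such as DCG); the names and the bucketing scheme are hypothetical. The point it illustrates is the difference from score-only pruning: postings are kept round-robin across time buckets, so every period of the archive retains some representation.

```python
from collections import defaultdict

def prune_postings(postings, keep_ratio=0.5, time_buckets=2):
    """Sketch of diversification-aware static pruning: within each posting
    list, keep the highest-scoring postings, but select round-robin across
    time buckets so no period is pruned away entirely.
    postings: {term: [(doc_id, score, timestamp), ...]}"""
    pruned = {}
    for term, plist in postings.items():
        budget = max(1, int(len(plist) * keep_ratio))
        buckets = defaultdict(list)
        for doc_id, score, ts in plist:
            # crude bucketing for illustration; a real system would bucket
            # by archive crawl period
            buckets[ts % time_buckets].append((doc_id, score, ts))
        for bucket in buckets.values():
            bucket.sort(key=lambda p: -p[1])  # best-scored first
        kept = []
        while len(kept) < budget:
            progressed = False
            for bucket in buckets.values():
                if bucket and len(kept) < budget:
                    kept.append(bucket.pop(0))
                    progressed = True
            if not progressed:
                break
        pruned[term] = kept
    return pruned
```

On a list where one period holds all the top scores, score-only pruning would discard the other period entirely, whereas this selection keeps the best posting from each.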
The P-Norm Push: A Simple Convex Ranking Algorithm that Concentrates at the Top of the List
We are interested in supervised ranking algorithms that perform especially well near the top of the
ranked list, and are only required to perform sufficiently well on the rest of the list. In this work,
we provide a general form of convex objective that gives high-scoring examples more importance.
This “push” near the top of the list can be chosen arbitrarily large or small, based on the preference
of the user. We choose ℓp-norms to provide a specific type of push; if the user sets p larger, the
objective concentrates harder on the top of the list. We derive a generalization bound based on
the p-norm objective, working around the natural asymmetry of the problem. We then derive a
boosting-style algorithm for the problem of ranking with a push at the top. The usefulness of the
algorithm is illustrated through experiments on repository data. We prove that the minimizer of the
algorithm’s objective is unique in a specific sense. Furthermore, we illustrate how our objective is
related to quality measurements for information retrieval.
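The shape of the objective can be sketched as follows. One common instantiation uses the exponential loss; the function name is hypothetical, but the structure matches the abstract: each negative example contributes a soft count of the positives it outranks, raised to the power p, so as p grows the objective concentrates on negatives near the top of the list.

```python
import numpy as np

def p_norm_push_objective(pos_scores, neg_scores, p=4):
    """P-norm push objective, exponential-loss version: sum over negatives
    of (soft count of positives each negative outranks) ** p. Larger p
    pushes high-ranked negatives down harder."""
    pos = np.asarray(pos_scores, dtype=float)
    neg = np.asarray(neg_scores, dtype=float)
    # inner sum: each negative's soft "height" in the ranked list
    heights = np.array([np.sum(np.exp(-(pos - y))) for y in neg])
    return float(np.sum(heights ** p))
```

A negative scored above all positives has a large height, and raising it to the power p makes it dominate the objective, which is exactly the "push at the top" the user controls through p.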
Évaluation de la pertinence dans les moteurs de recherche géoréférencés (Relevance Evaluation in Geo-Referenced Search Engines)
Learning to rank documents in a search engine requires relevance judgments. We present the results of a pilot study on relevance modeling for local (geo-referenced) search engines. These engines present search results on a map or as a list of place listings. Each listing contains the attributes of a place (name, address, phone number, etc.), most of which are clickable links. We model the relevance of a result as a weighted sum of the clicks it receives. We show that giving each component of the model the same weight already yields good results, and that a relative order of importance between click types can be derived to determine the optimal weights.
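The click model described above is a simple weighted sum, which can be sketched directly. The function and attribute names are hypothetical; the default of equal weights reflects the study's finding that equal weighting already works well.

```python
def click_relevance(clicks, weights=None):
    """Relevance of a local-search result as a weighted sum of clicks on
    its attributes (name, address, phone number, ...). With no weights
    given, every click type counts equally."""
    if weights is None:
        weights = {k: 1.0 for k in clicks}  # equal-weight baseline
    return sum(weights.get(k, 0.0) * n for k, n in clicks.items())
```

Replacing the uniform weights with ones derived from the relative importance of click types (e.g. weighting a phone-number click more heavily than a name click) is then a one-argument change.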