59,853 research outputs found
Bayesian analysis of wandering vector models for displaying ranking data
In the process of examining k objects, each judge provides a ranking of them. The aim of this paper is to investigate a probabilistic model for ranking data, the wandering vector model. The model represents objects by points in a d-dimensional space, and the judges are represented by latent vectors emanating from the origin in the same space. Each judge samples a vector from a multivariate normal distribution; given this vector, the judge's utility assigned to an object is taken to be the length of the orthogonal projection of the object point onto the judge vector, plus a normally distributed random error. The ordering of the k utilities given by the judge determines the judge's ranking. A Bayesian approach and the Gibbs sampling technique are used for parameter estimation. The method of computing the marginal likelihood proposed by Chib (1995) is used to select the dimensionality of the model. Simulations are done to demonstrate the proposed estimation and model selection method. We then analyze the Goldberg data, in which 10 occupations are ranked according to the degree of social prestige.
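The generative mechanism described in the abstract can be sketched in a few lines. All numbers below (object coordinates, judge-vector distribution, error scale) are made-up illustrative values, not estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ranking(objects, mu, sigma, error_sd=0.1):
    """Draw one judge's ranking under the wandering vector model."""
    v = rng.multivariate_normal(mu, sigma)          # judge's latent vector
    # Utility = length of the orthogonal projection of the object point
    # onto the judge vector, plus normally distributed noise.
    utilities = objects @ v / np.linalg.norm(v)
    utilities = utilities + rng.normal(0.0, error_sd, size=len(objects))
    return np.argsort(-utilities)                   # best object first

# Toy example: k = 4 objects in a d = 2 space (made-up coordinates).
objects = np.array([[1.0, 0.0], [0.8, 0.6], [0.0, 1.0], [-0.5, 0.5]])
ranking = sample_ranking(objects,
                         mu=np.array([1.0, 0.5]),
                         sigma=0.1 * np.eye(2))
```

Repeating this draw for many judges yields the ranking data whose parameters (object points, mu, sigma) the paper estimates by Gibbs sampling.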
Probabilistic performance estimators for computational chemistry methods: the empirical cumulative distribution function of absolute errors
Benchmarking studies in computational chemistry use reference datasets to
assess the accuracy of a method through error statistics. The commonly used
error statistics, such as the mean signed and mean unsigned errors, do not
inform end-users on the expected amplitude of prediction errors attached to
these methods. We show that, because the distributions of model errors are neither
normal nor zero-centered, these error statistics cannot be used to infer
prediction error probabilities. To overcome this limitation, we advocate for
the use of more informative statistics, based on the empirical cumulative
distribution function of unsigned errors, namely (1) the probability for a new
calculation to have an absolute error below a chosen threshold, and (2) the
maximal amplitude of errors one can expect with a chosen high confidence level.
Those statistics are also shown to be well suited for benchmarking and ranking
studies. Moreover, the standard error on all benchmarking statistics depends on
the size of the reference dataset. Systematic publication of these standard
errors would be very helpful to assess the statistical reliability of
benchmarking conclusions.
Comment: Supplementary material: https://github.com/ppernot/ECDF
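The two advocated statistics are simple functionals of the empirical distribution of unsigned errors. As a minimal sketch (the error values below are invented for illustration):

```python
import numpy as np

def ecdf_statistics(errors, threshold, confidence=0.95):
    """ECDF-based performance statistics for a set of model errors.

    Returns (p_below, q_conf):
      p_below : empirical probability that |error| <= threshold,
      q_conf  : error amplitude not exceeded with the chosen confidence,
                i.e. the `confidence` quantile of the unsigned errors.
    """
    abs_err = np.abs(np.asarray(errors, dtype=float))
    p_below = np.mean(abs_err <= threshold)
    q_conf = np.quantile(abs_err, confidence)
    return p_below, q_conf

# Illustrative signed errors for a hypothetical method on a benchmark set.
errors = [-0.8, 0.1, 0.3, -0.2, 1.5, 0.05, -0.4, 0.6]
p, q = ecdf_statistics(errors, threshold=0.5)
```

The standard errors the abstract calls for could be estimated for both statistics by bootstrap resampling of the reference dataset.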
Scalable Probabilistic Similarity Ranking in Uncertain Databases (Technical Report)
This paper introduces a scalable approach for probabilistic top-k similarity
ranking on uncertain vector data. Each uncertain object is represented by a set
of vector instances that are assumed to be mutually exclusive. The objective is
to rank the uncertain data according to their distance to a reference object.
We propose a framework that incrementally computes for each object instance and
ranking position, the probability of the object falling at that ranking
position. The resulting rank probability distribution can serve as input for
several state-of-the-art probabilistic ranking models. Existing approaches
compute this probability distribution by applying a dynamic programming
approach of quadratic complexity. In this paper we theoretically as well as
experimentally show that our framework reduces this to a linear-time complexity
while having the same memory requirements, facilitated by incremental accessing
of the uncertain vector instances in increasing order of their distance to the
reference object. Furthermore, we show how the output of our method can be used
to apply probabilistic top-k ranking for the objects, according to different
state-of-the-art definitions. We conduct an experimental evaluation on
synthetic and real data, which demonstrates the efficiency of our approach.
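To make the rank probability distribution concrete, here is the classic quadratic-time dynamic program that the paper improves upon, under a simplifying independence assumption (each other object is independently closer to the reference with some probability); this is a sketch of the baseline, not the paper's linear-time framework:

```python
def rank_distribution(p_closer):
    """Distribution over the ranking position of a candidate object.

    p_closer[i] is the probability that object i ends up closer to the
    reference object than the candidate. Assuming independence, the number
    of closer objects is Poisson-binomial, computed by the O(n^2) dynamic
    program that serves as the baseline in the paper.
    """
    dist = [1.0]  # P(0 objects closer) before processing any object
    for p in p_closer:
        nxt = [0.0] * (len(dist) + 1)
        for j, q in enumerate(dist):
            nxt[j] += q * (1.0 - p)   # object not closer: count unchanged
            nxt[j + 1] += q * p       # object closer: one more ranked ahead
        dist = nxt
    return dist  # dist[j] = P(exactly j objects ranked ahead of candidate)

probs = rank_distribution([0.5, 0.2])  # two competing objects (toy values)
```

The paper's contribution is to avoid rebuilding this table per object by accessing instances in increasing distance order, reducing the cost to linear time.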
Web Site Personalization based on Link Analysis and Navigational Patterns
The continuous growth in the size and use of the World Wide Web calls for new methods of design and development of on-line information services. The need to predict users’ preferences in order to improve the usability and user retention of a web site is evident, and can be addressed by personalizing it. Recommendation algorithms aim at proposing “next” pages to users based on their current visit and the navigational patterns of past users. In the vast majority of related algorithms, however, only the usage data are used to produce recommendations, disregarding the structural properties of the web graph. Thus, pages that are important in terms of PageRank authority score may be underrated. In this work we present UPR, a PageRank-style algorithm which combines usage data and link analysis techniques to assign probabilities to web pages based on their importance in the web site’s navigational graph. We propose applying a localized version of UPR (l-UPR) to personalized navigational sub-graphs for online web page ranking and recommendation. Moreover, we propose a hybrid probabilistic predictive model based on Markov models, in which link analysis is used to assign the prior probabilities. We show experimentally that this approach results in more objective and representative predictions than those produced by pure usage-based approaches.
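The core idea of biasing a PageRank-style computation with usage data can be sketched as follows; this is a generic usage-weighted PageRank, not the exact UPR formulation, and the graph and click counts are invented:

```python
import numpy as np

def usage_pagerank(adj, usage, d=0.85, iters=100):
    """PageRank-style scores with usage-weighted transitions.

    adj   : (n, n) 0/1 link matrix of the site graph.
    usage : (n, n) observed click-through counts (assumed data); out-link
            transition probabilities are proportional to these counts
            instead of being uniform, as in plain PageRank.
    """
    n = len(adj)
    w = adj * usage
    row_sums = w.sum(axis=1, keepdims=True)
    safe = np.where(row_sums == 0, 1.0, row_sums)
    # Pages with no (used) out-links teleport uniformly.
    trans = np.where(row_sums > 0, w / safe, 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (r @ trans)
    return r

# Toy 3-page site: page 0 links to 1 and 2, page 1 links back to 0,
# page 2 is a dead end; click counts are made up.
adj = np.array([[0, 1, 1], [1, 0, 0], [0, 0, 0]], dtype=float)
usage = np.array([[0, 3, 1], [2, 0, 0], [0, 0, 0]], dtype=float)
scores = usage_pagerank(adj, usage)
```

Restricting the same computation to a visitor's personalized navigational sub-graph corresponds to the localized (l-UPR) idea described above.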
Neural Networks for Information Retrieval
Machine learning plays a role in many aspects of modern IR systems, and deep
learning is applied in all of them. The fast pace of modern-day research has
given rise to many different approaches for many different IR problems. The
amount of information available can be overwhelming both for junior students
and for experienced researchers looking for new research topics and directions.
Additionally, it is interesting to see what key insights into IR problems the
new technologies are able to give us. The aim of this full-day tutorial is to
give a clear overview of current tried-and-trusted neural methods in IR and how
they benefit IR research. It covers key architectures, as well as the most
promising future directions.
Comment: Overview of full-day tutorial at SIGIR 201
Distributed Learning from Interactions in Social Networks
We consider a network scenario in which agents can evaluate each other
according to a score graph that models some interactions. The goal is to design
a distributed protocol, run by the agents, that allows them to learn their
unknown state among a finite set of possible values. We propose a Bayesian
framework in which scores and states are associated to probabilistic events
with unknown parameters and hyperparameters, respectively. We show that each
agent can learn its state by means of a local Bayesian classifier and a
(centralized) Maximum-Likelihood (ML) estimator of parameter-hyperparameter
that combines plain ML and Empirical Bayes approaches. By using tools from
graphical models, which allow us to gain insight on conditional dependencies of
scores and states, we provide a relaxed probabilistic model that ultimately
leads to a parameter-hyperparameter estimator amenable to distributed
computation. To highlight the appropriateness of the proposed relaxation, we
demonstrate the distributed estimators on a social interaction set-up for user
profiling.
Comment: This submission is a shorter work (for conference publication) of a
more comprehensive paper, already submitted as arXiv:1706.04081 (under review
for journal publication). In this short submission only one social set-up is
considered and only one of the relaxed estimators is proposed. Moreover, the
exhaustive analysis carried out in the longer manuscript is completely
missing in this version.
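The local Bayesian classifier each agent runs can be sketched under simple assumptions: discrete states, discrete scores, and a hand-made likelihood table (in the paper these parameters would come from the ML/Empirical-Bayes estimator, not be fixed by hand):

```python
import numpy as np

def local_bayes_classify(scores, likelihood, prior):
    """Local Bayesian classifier for one agent's unknown state.

    scores     : scores the agent received from its neighbours.
    likelihood : likelihood[state][score] = P(score | state); an assumed,
                 hand-made parameter table for this sketch.
    prior      : prior probability of each state.
    Returns the posterior over states and the MAP state.
    """
    log_post = np.log(np.asarray(prior, dtype=float))
    for s in scores:
        for k in range(len(prior)):
            log_post[k] += np.log(likelihood[k][s])
    post = np.exp(log_post - log_post.max())  # stabilised normalisation
    post /= post.sum()
    return post, int(np.argmax(post))

# Illustrative two-state, binary-score example (hand-made parameters):
likelihood = [[0.7, 0.3],   # P(score | state 0)
              [0.2, 0.8]]   # P(score | state 1)
post, map_state = local_bayes_classify(scores=[1, 1, 0],
                                       likelihood=likelihood,
                                       prior=[0.5, 0.5])
```

Two high scores and one low score favour state 1 under these assumed likelihoods; the distributed contribution of the paper lies in estimating the likelihood parameters and hyperparameters, which this sketch takes as given.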