Network Inference from Co-Occurrences
The recovery of network structure from experimental data is a basic and
fundamental problem. Unfortunately, experimental data often do not directly
reveal structure due to inherent limitations such as imprecision in timing or
other observation mechanisms. We consider the problem of inferring network
structure in the form of a directed graph from co-occurrence observations. Each
observation arises from a transmission made over the network and indicates
which vertices carry the transmission without explicitly conveying their order
in the path. Without order information, there are an exponential number of
feasible graphs which agree with the observed data equally well. Yet, the basic
physical principles underlying most networks strongly suggest that all feasible
graphs are not equally likely. In particular, vertices that co-occur in many
observations are probably closely connected. Previous approaches to this
problem are based on ad hoc heuristics. We model the experimental observations
as independent realizations of a random walk on the underlying graph, subjected
to a random permutation which accounts for the lack of order information.
Treating the permutations as missing data, we derive an exact
expectation-maximization (EM) algorithm for estimating the random walk
parameters. For long transmission paths the exact E-step may be computationally
intractable, so we also describe an efficient Monte Carlo EM (MCEM) algorithm
and derive conditions which ensure convergence of the MCEM algorithm with high
probability. Simulations and experiments with Internet measurements demonstrate
the promise of this approach.
Comment: Submitted to IEEE Transactions on Information Theory. An extended version is available as University of Wisconsin Technical Report ECE-06-
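As a concrete illustration of why the exact E-step scales poorly, the likelihood of a single unordered co-occurrence requires summing the random-walk path probability over every ordering of the observed vertex set. A minimal sketch, with a made-up 3-vertex transition matrix (`P` and `pi0` are illustrative, not from the paper):

```python
from itertools import permutations

def cooccurrence_likelihood(P, pi0, vertices):
    """Sum the random-walk path probability over every ordering of the
    observed (unordered) vertex set -- |vertices|! terms, which is why
    the exact E-step becomes intractable for long transmission paths."""
    total = 0.0
    for order in permutations(vertices):
        p = pi0[order[0]]                 # start of the walk
        for a, b in zip(order, order[1:]):
            p *= P[a][b]                  # one transition per hop
        total += p
    return total

# Toy 3-vertex graph with made-up transition probabilities.
P = [[0.1, 0.8, 0.1],
     [0.2, 0.1, 0.7],
     [0.3, 0.3, 0.4]]
pi0 = [0.6, 0.3, 0.1]

lik = cooccurrence_likelihood(P, pi0, (0, 1, 2))
```

The factorial number of terms in this sum is exactly what motivates the Monte Carlo E-step for long paths.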
On Quantifying Qualitative Geospatial Data: A Probabilistic Approach
Living in the era of data deluge, we have witnessed a web content explosion,
largely due to the massive availability of User-Generated Content (UGC). In
this work, we specifically consider the problem of geospatial information
extraction and representation, where one can exploit diverse sources of
information (such as image, audio, and text data), going beyond
traditional volunteered geographic information. Our ambition is to include
available narrative information in an effort to better explain geospatial
relationships: with spatial reasoning being a basic form of human cognition,
narratives expressing such experiences typically contain qualitative spatial
data, i.e., spatial objects and spatial relationships.
To this end, we formulate a quantitative approach for the representation of
qualitative spatial relations extracted from UGC in the form of texts. The
proposed method quantifies such relations based on multiple text observations.
Such observations provide distance and orientation features which are utilized
by a greedy Expectation Maximization-based (EM) algorithm to infer a
probability distribution over predefined spatial relationships; the latter
represent the quantified relationships under user-defined probabilistic
assumptions. We evaluate the applicability and quality of the proposed approach
using real UGC data originating from an actual travel blog text corpus. To
verify the quality of the result, we generate grid-based maps visualizing the
spatial extent of the various relations.
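The greedy EM step described above can be illustrated with a toy mixture over predefined orientation relations. Everything below (the relation set, the Gaussian bearing model, the angular spread `SIGMA`, and the observations) is an illustrative assumption, not the paper's actual parameterisation:

```python
import math

# Hypothetical predefined relations, each a mean bearing in degrees.
RELATIONS = {"north of": 0.0, "east of": 90.0, "south of": 180.0, "west of": 270.0}
SIGMA = 30.0  # assumed angular spread per relation (illustrative)

def ang_diff(a, b):
    """Smallest absolute angular difference between two bearings."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def em_relation_weights(bearings, iters=50):
    """EM over mixture weights: which relation generated each bearing?"""
    names = list(RELATIONS)
    w = {n: 1.0 / len(names) for n in names}
    for _ in range(iters):
        counts = {n: 0.0 for n in names}
        for theta in bearings:
            # E-step: responsibility of each relation for this observation
            lik = {n: w[n] * math.exp(-ang_diff(theta, mu) ** 2 / (2 * SIGMA ** 2))
                   for n, mu in RELATIONS.items()}
            z = sum(lik.values())
            for n in names:
                counts[n] += lik[n] / z
        # M-step: weights proportional to expected counts
        w = {n: counts[n] / len(bearings) for n in names}
    return w

# Bearings extracted from (hypothetical) text observations.
weights = em_relation_weights([80.0, 95.0, 100.0, 85.0, 270.0])
```

Observations clustered around 90 degrees drive the mixture weight toward "east of", giving a probability distribution over the predefined relations.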
Context-Aware Zero-Shot Recognition
We present a novel problem setting in zero-shot learning: zero-shot object
recognition and detection in context. Contrary to traditional zero-shot
learning methods, which simply infer unseen categories by transferring
knowledge from objects belonging to semantically similar seen categories,
we aim to understand the identity of novel objects in an image surrounded
by known objects using an inter-object relation prior. Specifically, we
leverage the visual context and the geometric relationships between all pairs
of objects in a single image, and capture the information useful to infer
unseen categories. We integrate our context-aware zero-shot learning framework
into the traditional zero-shot learning techniques seamlessly using a
Conditional Random Field (CRF). The proposed algorithm is evaluated on both
zero-shot region classification and zero-shot detection tasks. The results on
the Visual Genome (VG) dataset show that our model significantly boosts
performance with the additional visual context compared to traditional methods.
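The CRF combination of unary zero-shot scores with pairwise context terms can be sketched with brute-force MAP inference over a toy image; the label set, scores, and compatibilities below are invented for illustration:

```python
from itertools import product

LABELS = ["dog", "frisbee", "surfboard"]

# unary[i][label]: per-object zero-shot classifier score (made-up numbers)
unary = [{"dog": 2.0, "frisbee": 0.1, "surfboard": 0.2},
         {"dog": 0.3, "frisbee": 1.0, "surfboard": 0.9}]

# pairwise[(a, b)]: context compatibility of two labels co-occurring
pairwise = {("dog", "frisbee"): 1.5, ("dog", "surfboard"): 0.2}

def pair_score(a, b):
    return pairwise.get((a, b), pairwise.get((b, a), 0.0))

def crf_map(unary, labels):
    """Exhaustive MAP over joint label assignments (fine for few objects;
    real CRF inference would use approximate methods for larger images)."""
    best, best_s = None, float("-inf")
    for assign in product(labels, repeat=len(unary)):
        s = sum(unary[i][l] for i, l in enumerate(assign))
        s += sum(pair_score(assign[i], assign[j])
                 for i in range(len(assign))
                 for j in range(i + 1, len(assign)))
        if s > best_s:
            best, best_s = assign, s
    return best

labels_map = crf_map(unary, LABELS)
```

Here the second object's unary scores slightly favour "frisbee" over "surfboard", and the pairwise term with "dog" tips the joint assignment decisively, showing how context resolves ambiguous objects.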
Multiple-Play Bandits in the Position-Based Model
Sequentially learning to place items in multi-position displays or lists is a
task that can be cast into the multiple-play semi-bandit setting. However, a
major concern in this context is when the system cannot decide whether the user
feedback for each item is actually exploitable. Indeed, much of the content may
have been simply ignored by the user. The present work proposes to exploit
available information regarding the display position bias under the so-called
Position-based click model (PBM). We first discuss how this model differs from
the Cascade model and its variants considered in several recent works on
multiple-play bandits. We then provide a novel regret lower bound for this
model as well as computationally efficient algorithms that display good
empirical and theoretical performance.
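A minimal sketch of a UCB-style policy under the PBM, assuming a click on item k shown in slot l occurs with probability theta_k * kappa_l, the position biases `kappa` are known, and each item's click counts are reweighted by its cumulative examination exposure. The constants and exploration term are illustrative, not the paper's algorithms:

```python
import math, random

def pbm_ucb(theta, kappa, horizon=2000, seed=0):
    rng = random.Random(seed)
    K, L = len(theta), len(kappa)
    clicks = [0.0] * K       # observed clicks per item
    exposure = [1e-9] * K    # cumulative kappa (examination mass) per item
    for t in range(1, horizon + 1):
        # UCB index on theta_k: kappa-weighted empirical mean + bonus
        ucb = [clicks[k] / exposure[k]
               + math.sqrt(1.5 * math.log(t) / exposure[k])
               for k in range(K)]
        ranking = sorted(range(K), key=lambda k: -ucb[k])[:L]
        for slot, k in enumerate(ranking):
            # PBM click model: examined with prob kappa[slot], then clicked
            if rng.random() < theta[k] * kappa[slot]:
                clicks[k] += 1.0
            exposure[k] += kappa[slot]
    return clicks, exposure

theta = [0.9, 0.5, 0.4, 0.1]   # item attractiveness (unknown to the learner)
kappa = [1.0, 0.6]             # position examination biases (assumed known)
clicks, exposure = pbm_ucb(theta, kappa)
```

Because absent clicks may simply mean the slot was never examined, dividing by cumulative kappa rather than raw display counts keeps the attractiveness estimates unbiased.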
Messy Data Modelling in Health Care Contingent Valuation Studies
This study addresses the complexity of modelling contingent valuation surveys with true zeros and non-ignorable missing responses, including “don’t know” and protest responses. An endogenous switching tobit model is specified to simultaneously estimate the parameters of the latent willingness-to-pay (WTP) decision variable and the latent true WTP level. A Bayesian technique is developed using MCMC methods, data augmentation, and a Metropolis-Hastings algorithm with Gibbs sampling to estimate the endogenous switching tobit model. The Bayesian approach presented here is useful even for finite sample sizes and for models with relatively flat likelihoods, such as sample selection models, for which convergence is a problem or for which, even when convergence is achieved, the correlation of the latent random errors falls outside the (-1,1) range.
The proposed methodology is applied to a single-bounded dichotomous choice contingent valuation model using British Eurowill data on evaluating a cancer health care program. Results in this study reveal that the interview interest scores for the unresolved or missing cases are substantially high and not far from the scores of “yes” respondents. The pattern in the values of socio-economic and health-related variables shows that these unresolved cases are not missing completely at random, so they may actually contain valuable information, at least on the willingness decision process of respondents. Inclusion of these unresolved cases is essential to modelling the WTP decision and the true WTP level, as reflected in the higher sum of log conditional predictive ordinate (SLCPO) goodness-of-fit criterion for a cross-validation sample and the higher covariance between the latent random errors of the latent self-selection (WTP decision) variable and the true WTP level model. The positive covariance and correlation of the latent random errors may explain why true WTP levels in DC contingent valuation studies are oftentimes overestimated.
The model presented in this paper may also be applied to double-bounded dichotomous choice models with slight modification.
Keywords: non-ignorable missing values, single-bounded dichotomous choice contingent valuation studies, Markov chain Monte Carlo methods
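The data-augmentation idea behind the Gibbs sampler can be sketched on a deliberately reduced model: a plain tobit with unit variance and a flat prior on the mean, rather than the paper's full endogenous switching specification with a selection equation. All data and settings below are invented:

```python
import random, math

def gibbs_tobit_mean(observed, n_censored, iters=500, burn=100, seed=1):
    """Gibbs sampler for the mean of latent WTP y* ~ N(mu, 1), where y*
    is observed only when positive and censored responses record y* <= 0."""
    rng = random.Random(seed)
    mu, draws = 0.0, []
    n = len(observed) + n_censored
    for it in range(iters):
        # Data augmentation: impute each censored y* from N(mu, 1)
        # truncated to y* <= 0 (simple rejection sampling).
        latent = []
        while len(latent) < n_censored:
            y = rng.gauss(mu, 1.0)
            if y <= 0.0:
                latent.append(y)
        # Conjugate update under a flat prior: mu | completed data ~ N(ybar, 1/n)
        ybar = (sum(observed) + sum(latent)) / n
        mu = rng.gauss(ybar, 1.0 / math.sqrt(n))
        if it >= burn:
            draws.append(mu)
    return sum(draws) / len(draws)

# Toy data: five positive WTP observations plus five censored responses.
mu_hat = gibbs_tobit_mean([0.8, 1.2, 0.5, 1.9, 0.7], n_censored=5)
```

The same alternation, with extra blocks for the selection equation's latent variable and the error covariance, underlies the switching-model sampler described above.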
Probabilistic Models over Ordered Partitions with Application in Learning to Rank
This paper addresses the general problem of modelling and learning rank data
with ties. We propose a probabilistic generative model that treats the process
as permutations over partitions. This results in a super-exponential
combinatorial state space with an unknown number of partitions and unknown
ordering among them. We approach the problem from discrete choice theory,
where subsets are chosen in a stagewise manner, significantly reducing the
state space at each stage. Further, we show that with suitable
parameterisation, we can still learn the models in linear time. We evaluate
the proposed models on the problem of learning to rank with data from the
recently held Yahoo! challenge, and demonstrate that the models are
competitive against well-known rivals.
Comment: 19 pages, 2 figures
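The stagewise subset-choice view can be sketched by computing the likelihood of an ordered partition (a ranking with tie-blocks) under a simple illustrative parameterisation: here a block's probability is proportional to the product of its item weights, normalised over all non-empty subsets of the remaining items. Note this normaliser is exponential in the number of remaining items; the paper's parameterisation is precisely what avoids that cost:

```python
from itertools import combinations

def block_prob(weights, S, R):
    """P(choose block S from remaining items R), proportional to the
    product of item weights, normalised over all non-empty subsets of R."""
    def wprod(T):
        p = 1.0
        for i in T:
            p *= weights[i]
        return p
    Z = sum(wprod(T) for r in range(1, len(R) + 1)
            for T in combinations(R, r))
    return wprod(S) / Z

def ordered_partition_prob(weights, blocks):
    """Likelihood of an ordered partition: stagewise subset choices,
    each stage removing the chosen tie-block from the remaining items."""
    remaining = tuple(sorted(i for b in blocks for i in b))
    p = 1.0
    for b in blocks:
        p *= block_prob(weights, tuple(b), remaining)
        remaining = tuple(i for i in remaining if i not in b)
    return p

w = {0: 3.0, 1: 1.0, 2: 1.0}
# Item 0 ranked first alone; items 1 and 2 tied in second place.
p = ordered_partition_prob(w, [[0], [1, 2]])
```

Each stage conditions only on what remains, so the joint probability factorises across stages, which is the structure the linear-time learning result exploits.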