A latent variable ranking model for content-based retrieval
34th European Conference on IR Research, ECIR 2012, Barcelona, Spain, April 1-5, 2012. Proceedings
Since their introduction, ranking SVM models [11] have become a powerful tool for training content-based retrieval systems. All we need for training a model are retrieval examples in the form of triplet constraints, i.e. examples specifying that, relative to some query, a database item a should be ranked higher than database item b. Such constraints can be obtained from the feedback of users of the retrieval system. Most previous ranking models learn either a global combination of elementary similarity functions or a combination defined with respect to a single database item. Instead, we propose a “coarse to fine” ranking model: given a query, we first compute a distribution over “coarse” classes and then use the linear combination that has been optimized for queries of that class. These coarse classes are hidden and need to be induced by the training algorithm. We propose a latent variable ranking model that induces both the latent classes and the weights of the linear combination for each class from ranking triplets. Our experiments over two large image datasets and a text retrieval dataset show the advantages of our model over learning a global combination as well as a combination for each test point (i.e. the transductive setting). Furthermore, compared to the transductive approach, our model has a clear computational advantage since it does not need to be retrained for each test query.
Spanish Ministry of Science and Innovation (JCI-2009-04240)
EU PASCAL2 Network of Excellence (FP7-ICT-216886)
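To make the training recipe concrete, here is a minimal numpy sketch of such a latent-class triplet ranker (the shapes, plain SGD, and hinge loss are illustrative assumptions, not the paper's actual optimization; the gradient through the class distribution is ignored for simplicity):

```python
# Minimal sketch of a latent-class triplet ranker (illustrative assumptions,
# not the paper's optimization; gradient through class_probs is ignored).
import numpy as np

rng = np.random.default_rng(0)
K, M, D = 3, 5, 8                      # latent classes, similarity functions, query dim
V = rng.normal(size=(K, D)) * 0.1      # class-assignment parameters
W = rng.normal(size=(K, M)) * 0.1      # per-class combination weights

def class_probs(q):
    """Soft assignment of query q to the latent 'coarse' classes."""
    s = V @ q
    e = np.exp(s - s.max())
    return e / e.sum()

def score(q, phi):
    """Class-weighted linear combination of elementary similarities phi(q, item)."""
    return class_probs(q) @ (W @ phi)

def sgd_step(q, phi_a, phi_b, lr=0.1):
    """One hinge-loss step on a triplet: item a should outrank item b for q."""
    global W
    if 1.0 - (score(q, phi_a) - score(q, phi_b)) > 0:   # violated margin
        W += lr * np.outer(class_probs(q), phi_a - phi_b)

for _ in range(100):                   # toy training loop on random triplets
    sgd_step(rng.normal(size=D), rng.normal(size=M), rng.normal(size=M))
```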
Computing in Additive Networks with Bounded-Information Codes
This paper studies the theory of the additive wireless network model, in
which the received signal is abstracted as an addition of the transmitted
signals. Our central observation is that the crucial challenge for computing in
this model is not high contention, as assumed previously, but rather
guaranteeing a bounded amount of \emph{information} in each neighborhood per
round, a property that we show is achievable using a new random coding
technique.
Technically, we provide efficient algorithms for fundamental distributed
tasks in additive networks, such as solving various symmetry breaking problems,
approximating network parameters, and solving an \emph{asymmetry revealing}
problem such as computing a maximal input.
The key method used is a novel random coding technique that allows a node to
successfully decode the received information, as long as it does not contain
too many distinct values. We then design our algorithms to produce a limited
amount of information in each neighborhood in order to leverage our enriched
toolbox for computing in additive networks.
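As a toy illustration of the additive model and the random-coding idea (the code length, decoding rule, and all names here are simplifying assumptions, not the paper's construction), a receiver that hears only the coordinatewise sum of codewords can still recover the set of transmitted values when few distinct values are present:

```python
# Toy simulation of the additive channel: the receiver hears the coordinatewise
# SUM of all transmitted codewords (parameters are illustrative only).
import numpy as np

rng = np.random.default_rng(1)
UNIVERSE, CODE_LEN = 32, 256          # possible input values, codeword length
CODE = rng.integers(0, 2, size=(UNIVERSE, CODE_LEN))  # one random codeword per value

def transmit(values):
    """Additive channel: sum of the senders' codewords, one sender per value."""
    return sum(CODE[v] for v in values)

def decode(received):
    """Declare value v present if every 1-position of its codeword was hit.
    With random codes this is reliable (w.h.p.) as long as the number of
    distinct values in the neighborhood is small."""
    hits = received > 0
    return {v for v in range(UNIVERSE) if np.all(hits[CODE[v] == 1])}

sent = {3, 17, 29}
print(decode(transmit(sent)))         # recovers {3, 17, 29} w.h.p.
```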
The Grail theorem prover: Type theory for syntax and semantics
As the name suggests, type-logical grammars are a grammar formalism based on
logic and type theory. From the perspective of grammar design, type-logical
grammars develop the syntactic and semantic aspects of linguistic phenomena
hand-in-hand, letting the desired semantics of an expression inform the
syntactic type and vice versa. Prototypical examples of the successful
application of type-logical grammars to the syntax-semantics interface include
coordination, quantifier scope and extraction. This chapter describes the Grail theorem prover, a series of tools for designing and testing grammars in various modern type-logical formalisms. All tools described in this chapter are freely available.
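As a hint of what developing syntax and semantics hand-in-hand means in practice, here is a toy Python sketch of categorial-grammar function application, where each lexical entry pairs a syntactic type with a meaning term (a deliberately simplified stand-in, not Grail's actual machinery):

```python
# Toy categorial-grammar fragment (illustrative only, not Grail's machinery):
# the two application rules compute syntactic type and meaning together.
NP, S = "np", "s"

def ldiv(a, b):  return ("\\", a, b)   # a\b : combines with an 'a' to its left
def rdiv(a, b):  return ("/", a, b)    # a/b : combines with a 'b' to its right

LEXICON = {
    "alice": (NP, "alice"),
    "bob":   (NP, "bob"),
    "sees":  (rdiv(ldiv(NP, S), NP), lambda y: lambda x: f"see({x},{y})"),
}

def combine(left, right):
    """Backward application (np, np\\s => s) and forward (a/b, b => a)."""
    (t1, m1), (t2, m2) = left, right
    if isinstance(t2, tuple) and t2[0] == "\\" and t2[1] == t1:
        return (t2[2], m2(m1))
    if isinstance(t1, tuple) and t1[0] == "/" and t1[2] == t2:
        return (t1[1], m1(m2))
    raise TypeError("types do not combine")

vp = combine(LEXICON["sees"], LEXICON["bob"])     # (np\s, meaning)
print(combine(LEXICON["alice"], vp))              # ('s', 'see(alice,bob)')
```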
Distributed Approximation on Power Graphs
We investigate graph problems in the following setting: we are given a graph $G$ and we are required to solve a problem on $G^2$. While we focus mostly on exploring this theme in the distributed CONGEST model, we show new results and surprising connections to the centralized model of computation. In the CONGEST model, it is natural to expect that problems on $G^2$ would be quite difficult to solve efficiently on $G$, due to congestion. However, we show that the picture is both more complicated and more interesting.
Specifically, we encounter two phenomena acting in opposing directions: (i) slowdown due to congestion and (ii) speedup due to structural properties of $G^2$.
We demonstrate these two phenomena via two fundamental graph problems,
namely, Minimum Vertex Cover (MVC) and Minimum Dominating Set (MDS). Among our
many contributions, the highlights are the following.
- In the CONGEST model, we show an $O(n/\epsilon)$-round $(1+\epsilon)$-approximation algorithm for MVC on $G^2$, while no $o(n^2)$-round algorithm is known for any better-than-2 approximation for MVC on $G$.
- We show a centralized polynomial-time $5/3$-approximation algorithm for MVC on $G^2$, whereas a better-than-2 approximation is UGC-hard for $G$.
- In contrast, for MDS, in the CONGEST model, we show an $\tilde{\Omega}(n^2)$ lower bound for a constant approximation factor for MDS on $G^2$, whereas an $\tilde{\Omega}(n^2)$ lower bound for MDS on $G$ is known only for exact computation.
In addition to these highlighted results, we prove a number of other results in the distributed CONGEST model, including an $\tilde{\Omega}(n^2)$ lower bound for computing an exact solution to MVC on $G^2$, a conditional hardness result for obtaining a $(1+\epsilon)$-approximation to MVC on $G^2$, and an $O(\log \Delta)$-approximation to the MDS problem on $G^2$ in $\mathrm{poly}\log n$ rounds.
Comment: Appears in PODC 2020. 40 pages, 7 figures
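For readers unfamiliar with the setting, the following toy, centralized Python sketch (using networkx; unrelated to the paper's CONGEST algorithms and bounds) builds the square $G^2$ and runs the classic matching-based 2-approximation for MVC on it:

```python
# Toy, centralized illustration of the problem setting (networkx-based).
import networkx as nx

def square(G):
    """G^2: connect u and w whenever their distance in G is 1 or 2."""
    G2 = nx.Graph()
    G2.add_nodes_from(G)
    G2.add_edges_from(G.edges)
    for u in G:
        for v in G[u]:
            for w in G[v]:
                if w != u:
                    G2.add_edge(u, w)
    return G2

def mvc_2approx(G):
    """Take both endpoints of a greedily built maximal matching (<= 2*OPT)."""
    cover = set()
    for u, v in G.edges:
        if u not in cover and v not in cover:
            cover |= {u, v}
    return cover

G = nx.path_graph(6)                      # 0-1-2-3-4-5
print(sorted(mvc_2approx(square(G))))     # a vertex cover of G^2
```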
Finite Automata for the Sub- and Superword Closure of CFLs: Descriptional and Computational Complexity
We answer two open questions by (Gruber, Holzer, Kutrib, 2009) on the
state-complexity of representing sub- or superword closures of context-free
grammars (CFGs): (1) We prove a (tight) upper bound of $2^{\mathcal{O}(n)}$ on the size of nondeterministic finite automata (NFAs) representing the subword closure of a CFG of size $n$. (2) We present a family of CFGs for which the minimal deterministic finite automata representing their subword closure match the upper bound of $2^{2^{\mathcal{O}(n)}}$ following from (1).
Furthermore, we prove that the inequivalence problem for NFAs representing sub-
or superword-closed languages is only NP-complete as opposed to PSPACE-complete
for general NFAs. Finally, we extend our results into an approximation method
to attack inequivalence problems for CFGs.
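To illustrate what a subword (factor) closure is, here is a small Python sketch for the easier regular case: for a trimmed NFA, the factor closure is accepted by the same automaton with every state made initial and final (a standard construction; the paper's contribution concerns the harder CFG case):

```python
# Subword (factor) closure for a regular language given as a trimmed NFA:
# keep the transitions, make every state both initial and final.
class NFA:
    def __init__(self, states, delta, starts, finals):
        self.states, self.delta = states, delta      # delta: (state, char) -> set
        self.starts, self.finals = starts, finals

    def accepts(self, word):
        current = set(self.starts)
        for ch in word:
            current = set().union(*(self.delta.get((q, ch), set()) for q in current))
        return bool(current & self.finals)

def subword_closure(nfa):
    """All states become initial and final; correct when the NFA is trimmed
    (every state reachable and co-reachable)."""
    return NFA(nfa.states, nfa.delta, set(nfa.states), set(nfa.states))

# L = {ab} ; its factors are eps, a, b, ab
A = NFA({0, 1, 2}, {(0, 'a'): {1}, (1, 'b'): {2}}, {0}, {2})
C = subword_closure(A)
print([w for w in ["", "a", "b", "ab", "ba"] if C.accepts(w)])  # ['', 'a', 'b', 'ab']
```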
Model Adaptation with Synthetic and Real Data for Semantic Dense Foggy Scene Understanding
This work addresses the problem of semantic scene understanding under dense
fog. Although considerable progress has been made in semantic scene
understanding, it is mainly related to clear-weather scenes. Extending
recognition methods to adverse weather conditions such as fog is crucial for
outdoor applications. In this paper, we propose a novel method, named
Curriculum Model Adaptation (CMAda), which gradually adapts a semantic
segmentation model from light synthetic fog to dense real fog in multiple
steps, using both synthetic and real foggy data. In addition, we present three
other main stand-alone contributions: 1) a novel method to add synthetic fog to
real, clear-weather scenes using semantic input; 2) a new fog density
estimator; 3) the Foggy Zurich dataset comprising 3808 real foggy images, with pixel-level semantic annotations for 16 images with dense fog. Our
experiments show that 1) our fog simulation slightly outperforms a
state-of-the-art competing simulation with respect to the task of semantic
foggy scene understanding (SFSU); 2) CMAda improves the performance of
state-of-the-art models for SFSU significantly by leveraging unlabeled real
foggy data. The datasets and code are publicly available.
Comment: final version, ECCV 2018
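The curriculum idea can be summarized in a few lines of schematic Python (every helper below is a hypothetical stand-in, not the authors' released API):

```python
# Schematic sketch of curriculum model adaptation (all helpers are
# hypothetical stand-ins): adapt from light to dense fog in stages,
# mixing synthetic data (true labels) with real fog (pseudo-labels).
def cmada(model, fog_levels, synth_data, real_images,
          fog_density, pseudo_label, train):
    for level in fog_levels:                       # e.g. light -> medium -> dense
        synth = synth_data(level)                  # synthetic fog, true labels
        real = [(im, pseudo_label(model, im))      # real fog, pseudo-labels
                for im in real_images if fog_density(im) <= level]
        model = train(model, synth + real)         # one curriculum step
    return model

# toy instantiation with stubs, just to show the control flow
m = cmada(model="seg-v0",
          fog_levels=[0.2, 0.5, 1.0],
          synth_data=lambda lvl: [("synth-img", "gt-label")],
          real_images=["real-1", "real-2"],
          fog_density=lambda im: 0.4,
          pseudo_label=lambda mdl, im: f"{mdl}:{im}",
          train=lambda mdl, data: mdl + "+")
print(m)   # "seg-v0+++"
```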
Experimental Evidence for Quantum Structure in Cognition
We prove a theorem showing that a collection of experimental data of
membership weights of items with respect to a pair of concepts and its
conjunction cannot be modeled within a classical measure theoretic weight
structure in case the experimental data contain the effect called
overextension. Since the effect of overextension, analogous to the well-known
guppy effect for concept combinations, is abundant in all experiments testing
weights of items with respect to pairs of concepts and their conjunctions, our
theorem constitutes a no-go theorem for classical measure structure for common
data of membership weights of items with respect to concepts and their
combinations. We put forward a simple geometric criterion that reveals the non-classicality of the membership weight structure and use experimentally measured
membership weights estimated by subjects in experiments to illustrate our
geometrical criterion. The violation of the classical weight structure is similar to the violation of the well-known Bell inequalities studied in quantum mechanics, suggesting that the quantum formalism, and hence modeling by quantum membership weights, can accomplish what classical membership weights cannot.
Comment: 12 pages, 3 figures
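A geometric criterion of this kind can be phrased as two inequalities on the weights. The check below follows the formulation common in the quantum-cognition literature (the example numbers are invented): data $(\mu(A), \mu(B), \mu(A \wedge B))$ admit a classical measure model iff the conjunction weight does not exceed either single weight (no overextension) and the implied total mass stays at most 1.

```python
# Classicality check for conjunction membership weights (conditions as in the
# quantum-cognition literature; example numbers are invented).
def classical_conjunction(mu_a, mu_b, mu_ab):
    """True iff (mu_a, mu_b, mu_ab) fits some classical (Kolmogorovian) measure:
    no overextension, and mu(A) + mu(B) - mu(A and B) must not exceed 1."""
    return mu_ab <= min(mu_a, mu_b) and mu_a + mu_b - mu_ab <= 1.0

print(classical_conjunction(0.7, 0.5, 0.4))   # True: a classical model exists
print(classical_conjunction(0.7, 0.5, 0.6))   # False: overextension, 0.6 > 0.5
```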
Classical Logical versus Quantum Conceptual Thought: Examples in Economics, Decision theory and Concept Theory
Inspired by a quantum-mechanical formalism for modeling concepts and their disjunctions and conjunctions, we put forward in this paper a specific hypothesis: within human thought, two superposed layers can be
distinguished: (i) a layer given form by an underlying classical deterministic
process, incorporating essentially logical thought and its indeterministic
version modeled by classical probability theory; (ii) a layer given form under
influence of the totality of the surrounding conceptual landscape, where the
different concepts figure as individual entities rather than (logical)
combinations of others, with measurable quantities such as 'typicality',
'membership', 'representativeness', 'similarity', 'applicability', 'preference'
or 'utility' carrying the influences. We call the process in this second layer 'quantum conceptual thought'; it is indeterministic in essence and contains holistic aspects, but is organized equally well as logical thought, although very differently. A substantial part of the 'quantum conceptual thought
process' can be modeled by quantum mechanical probabilistic and mathematical
structures. We consider examples of three specific domains of research where
the effects of the presence of quantum conceptual thought and its deviations
from classical logical thought have been noticed and studied, namely economics, decision theory, and concept theory; these domains provide experimental evidence for our hypothesis.
Comment: 14 pages
End-to-End Learning of Driving Models with Surround-View Cameras and Route Planners
For human drivers, having rear and side-view mirrors is vital for safe
driving. They deliver a more complete view of what is happening around the car.
Human drivers also heavily exploit their mental map for navigation.
Nonetheless, several methods have been published that learn driving models with
only a front-facing camera and without a route planner. This lack of
information renders the self-driving task quite intractable. We investigate the
problem in a more realistic setting, which consists of a surround-view camera
system with eight cameras, a route planner, and a CAN bus reader. In
particular, we develop a sensor setup that provides data for a 360-degree view
of the area surrounding the vehicle, the driving route to the destination, and
low-level driving maneuvers (e.g. steering angle and speed) by human drivers.
With such a sensor setup we collect a new driving dataset, covering diverse
driving scenarios and varying weather/illumination conditions. Finally, we
learn a novel driving model by integrating information from the surround-view
cameras and the route planner. Two route planners are exploited: 1) one that represents the planned route as a stack of GPS coordinates from OpenStreetMap, and 2) one that renders the planned route on TomTom Go Mobile and records the progression as a video. Our experiments show that: 1) 360-degree
surround-view cameras help avoid failures made with a single front-view camera,
in particular for city driving and intersection scenarios; and 2) route
planners help the driving task significantly, especially for steering angle
prediction.
Comment: to be published at ECCV 2018
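A minimal PyTorch sketch of such a fusion architecture follows (layer sizes and structure are invented for illustration; the paper's network differs): per-camera features are concatenated with a route representation and mapped to the two low-level maneuvers.

```python
# Hypothetical fusion architecture: 8 surround-view cameras + route planner
# features -> (steering angle, speed). Illustrative, not the paper's network.
import torch
import torch.nn as nn

class DrivingModel(nn.Module):
    def __init__(self, n_cams=8, feat=64, route_dim=16):
        super().__init__()
        self.cam_enc = nn.Sequential(            # shared encoder, applied per camera
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat))
        self.head = nn.Sequential(
            nn.Linear(n_cams * feat + route_dim, 128), nn.ReLU(),
            nn.Linear(128, 2))                   # -> (steering angle, speed)

    def forward(self, images, route):
        # images: (batch, n_cams, 3, H, W); route: (batch, route_dim)
        feats = [self.cam_enc(images[:, i]) for i in range(images.shape[1])]
        return self.head(torch.cat(feats + [route], dim=1))

model = DrivingModel()
out = model(torch.randn(2, 8, 3, 64, 64), torch.randn(2, 16))
print(out.shape)   # torch.Size([2, 2])
```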
Quantum Experimental Data in Psychology and Economics
We prove a theorem which shows that a collection of experimental data of
probabilistic weights related to decisions with respect to situations and their
disjunction cannot be modeled within a classical probabilistic weight structure
in case the experimental data contain the effect referred to as the
'disjunction effect' in psychology. We identify different experimental
situations in psychology, more specifically in concept theory and in decision
theory, and in economics (namely situations where Savage's Sure-Thing Principle
is violated) where the disjunction effect appears and we point out the common
nature of the effect. We analyze how our theorem constitutes a no-go theorem
for classical probabilistic weight structures for common experimental data when
the disjunction effect affects the values of these data. We put forward a simple geometric criterion that reveals the non-classicality of the considered probabilistic weights, and we illustrate this geometrical criterion by means of
experimentally measured membership weights of items with respect to pairs of
concepts and their disjunctions. The violation of the classical probabilistic weight structure is closely analogous to the violation of the well-known Bell inequalities studied in quantum mechanics. The no-go theorem we prove in the present article with respect to the collection of experimental data we consider has a status analogous to the well-known no-go theorems for hidden-variable theories in quantum mechanics with respect to experimental data obtained in quantum laboratories. For this reason, our analysis puts forward a strong argument in favor of using a quantum formalism to model the psychological and economic experimental data considered in this paper.
Comment: 15 pages, 4 figures
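The dual check for disjunction data can be written the same way (again following the quantum-cognition literature; numbers invented): classically, $\mu(A \vee B)$ must lie between $\max(\mu(A), \mu(B))$ and $\min(1, \mu(A) + \mu(B))$, and the disjunction effect is precisely a violation of the lower bound.

```python
# Dual classicality check for disjunction weights (example numbers invented).
# The 'disjunction effect' violates the lower bound max(mu_a, mu_b) <= mu_ab.
def classical_disjunction(mu_a, mu_b, mu_ab):
    return max(mu_a, mu_b) <= mu_ab <= min(1.0, mu_a + mu_b)

print(classical_disjunction(0.5, 0.4, 0.7))   # True: a classical model exists
print(classical_disjunction(0.5, 0.4, 0.3))   # False: 0.3 < max(0.5, 0.4)
```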