A co-designed equalization, modulation, and coding scheme
The commercial impact and technical success of Trellis Coded Modulation illustrate that, if Shannon's capacity is to be approached, the modulation and coding of an analogue signal ought to be viewed as an integrated process. More recent work has focused on going beyond the gains obtained for Additive White Gaussian Noise channels and has sought to combine coding/modulation with adaptive equalization. The motivation is to achieve similar advances on less perfect, non-idealized channels.
Lazy Model Expansion: Interleaving Grounding with Search
Finding satisfying assignments for the variables involved in a set of
constraints can be cast as a (bounded) model generation problem: search for
(bounded) models of a theory in some logic. The state-of-the-art approach for
bounded model generation for rich knowledge representation languages, like ASP,
FO(.) and Zinc, is ground-and-solve: reduce the theory to a ground or
propositional one and apply a search algorithm to the resulting theory.
An important bottleneck is the blowup of the size of the theory caused by the
reduction phase. Lazily grounding the theory during search is a way to overcome
this bottleneck. We present a theoretical framework and an implementation in
the context of the FO(.) knowledge representation language. Instead of
grounding all parts of a theory, justifications are derived for some parts of
it. Given a partial assignment for the grounded part of the theory and valid
justifications for the formulas of the non-grounded part, the justifications
provide a recipe to construct a complete assignment that satisfies the
non-grounded part. When a justification for a particular formula becomes
invalid during search, a new one is derived; if that fails, the formula is
split into a part to be grounded and a part that can be justified.
The theoretical framework captures existing approaches for tackling the
grounding bottleneck such as lazy clause generation and grounding-on-the-fly,
and presents a generalization of the 2-watched literal scheme. We present an
algorithm for lazy model expansion and integrate it in a model generator for
FO(ID), a language extending first-order logic with inductive definitions. The
algorithm is implemented as part of the state-of-the-art FO(ID) Knowledge-Base
System IDP. Experimental results illustrate the power and generality of the
approach.
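To make the justification idea concrete, below is a minimal, self-contained Python sketch for a single universally quantified constraint, forall x: p(x) or q(x). The toy constraint, the dictionary-based assignment, and all function names are assumptions made for this illustration, not the IDP implementation: each non-grounded instance keeps a justification (a disjunct not falsified by the partial assignment), valid justifications provide the recipe to complete the assignment, and an instance is grounded only when no justification can be found.

```python
# Illustrative sketch only: a toy constraint "forall x in DOMAIN: p(x) or q(x)".
# Atom names, the assignment dictionary, and the helper functions are assumed
# for this example; the real IDP algorithm handles full FO(ID) theories.

DOMAIN = range(1000)

def justification(x, assignment):
    """Return a disjunct of p(x) or q(x) not falsified by the partial assignment."""
    for atom in (("p", x), ("q", x)):
        if assignment.get(atom) is not False:   # unassigned or already true
            return atom
    return None

def complete_with_justifications(assignment):
    """Use valid justifications as a recipe to complete the partial assignment."""
    must_ground = []                            # instances whose justification failed
    for x in DOMAIN:
        j = justification(x, assignment)
        if j is None:
            must_ground.append(x)               # would be grounded and handed to search
        else:
            assignment.setdefault(j, True)      # make the justifying disjunct true
    return assignment, must_ground

# The search assigned a few atoms of the grounded part; the rest stays open.
partial = {("q", 3): False, ("p", 7): False}
model, to_ground = complete_with_justifications(dict(partial))
print(to_ground, model[("p", 3)], model[("q", 7)])   # [] True True
```

When to_ground is non-empty, those instances would be turned into ground clauses for the search, mirroring the split step described above.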
Petri nets for systems and synthetic biology
We give a description of a Petri net-based framework for modelling and analysing biochemical pathways, which unifies the qualitative, stochastic and continuous paradigms. Each perspective adds its contribution to the understanding of the system, thus the three approaches do not compete, but complement each other. We illustrate our approach by applying it to an extended model of the three-stage cascade, which forms the core of the ERK signal transduction pathway. Consequently our focus is on transient behaviour analysis. We demonstrate how qualitative descriptions are abstractions over stochastic or continuous descriptions, and show that the stochastic and continuous models approximate each other. Although our framework is based on Petri nets, it can be applied more widely to other formalisms which are used to model and analyse biochemical networks.
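As a small, hedged illustration of the claim that the stochastic and continuous readings approximate each other, the Python sketch below simulates a single mass-action reaction A -> B both ways: once with a Gillespie-style stochastic simulation and once with an explicit-Euler ODE. The reaction, rate constant, initial marking and step size are arbitrary choices for this toy; the paper's framework covers full qualitative, stochastic and continuous Petri nets of the ERK cascade.

```python
# Toy net with one transition A -> B under mass-action kinetics.
# Rate constant, initial marking and step size are arbitrary choices.

import random

K, A0, T_END, DT = 0.5, 1000, 5.0, 0.001

def stochastic(a):
    """Gillespie-style simulation: fire A -> B one token at a time."""
    t = 0.0
    while a > 0:
        t += random.expovariate(K * a)   # waiting time to the next firing
        if t > T_END:
            break
        a -= 1
    return a

def continuous(a):
    """Explicit-Euler integration of dA/dt = -K * A."""
    for _ in range(int(T_END / DT)):
        a -= K * a * DT
    return a

print("stochastic A(T_END) ~", stochastic(A0))
print("continuous A(T_END) ~", round(continuous(A0), 1))
```

For large token counts the two runs agree closely, which is the sense in which the stochastic and continuous models approximate each other; the qualitative reading would only record which transitions can fire at all.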
Lifeworld Analysis
We argue that the analysis of agent/environment interactions should be
extended to include the conventions and invariants maintained by agents
throughout their activity. We refer to this thicker notion of environment as a
lifeworld and present a partial set of formal tools for describing structures
of lifeworlds and the ways in which they computationally simplify activity. As
one specific example, we apply the tools to the analysis of the Toast system
and show how versions of the system with very different control structures in
fact implement a common control structure together with different conventions
for encoding task state in the positions or states of objects in the
environment.
Optical Flow Requires Multiple Strategies (but only one network)
We show that the matching problem that underlies optical flow requires
multiple strategies, depending on the amount of image motion and other factors.
We then study the implications of this observation on training a deep neural
network for representing image patches in the context of descriptor based
optical flow. We propose a metric learning method, which selects suitable
negative samples based on the nature of the true match. This type of training
produces a network that displays multiple strategies depending on the input and
leads to state-of-the-art results on the KITTI 2012 and KITTI 2015 optical flow benchmarks.
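The abstract does not spell out the selection rule, so the sketch below is only one plausible reading of "negative samples selected by the nature of the true match": it pairs a standard hinge triplet loss with a sampling rule that prefers near-miss negatives for small motions and distant negatives for large motions. The descriptor dimensionality, the 2-pixel threshold, and the use of patch-index distance as a stand-in for spatial distance are all assumptions, not the paper's method.

```python
# Hedged sketch: match-dependent negative sampling for a triplet hinge loss.
# The selection rule here is an assumption, not the paper's actual procedure.

import numpy as np

def triplet_hinge(anchor, positive, negative, margin=0.2):
    """Hinge triplet loss on L2 descriptor distances."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, margin + d_pos - d_neg)

def select_negative(num_patches, true_idx, flow_magnitude, rng):
    """Pick a negative patch index, conditioned on the true match's motion."""
    candidates = np.delete(np.arange(num_patches), true_idx)
    dist = np.abs(candidates - true_idx).astype(float)   # index distance as a proxy
    if flow_magnitude < 2.0:
        weights = 1.0 / (1.0 + dist)      # small motion: favour near-miss negatives
    else:
        weights = dist                    # large motion: favour distant negatives
    return int(rng.choice(candidates, p=weights / weights.sum()))

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(50, 128))                  # toy patch descriptors
neg = select_negative(50, true_idx=10, flow_magnitude=5.0, rng=rng)
loss = triplet_hinge(descriptors[10], descriptors[10] + 0.01, descriptors[neg])
print(neg, round(loss, 3))
```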
Efficient Optimization for Rank-based Loss Functions
The accuracy of information retrieval systems is often measured using complex
loss functions such as the average precision (AP) or the normalized discounted
cumulative gain (NDCG). Given a set of positive and negative samples, the
parameters of a retrieval system can be estimated by minimizing these loss
functions. However, the non-differentiability and non-decomposability of these
loss functions does not allow for simple gradient based optimization
algorithms. This issue is generally circumvented by either optimizing a
structured hinge-loss upper bound to the loss function or by using asymptotic
methods like the direct-loss minimization framework. Yet, the high
computational complexity of loss-augmented inference, which is necessary for both frameworks, prohibits their use on large training data sets. To alleviate this deficiency, we present a novel quicksort-flavored algorithm for
a large class of non-decomposable loss functions. We provide a complete
characterization of the loss functions that are amenable to our algorithm, and
show that it includes both AP and NDCG based loss functions. Furthermore, we
prove that no comparison based algorithm can improve upon the computational
complexity of our approach asymptotically. We demonstrate the effectiveness of
our approach in the context of optimizing the structured hinge loss upper bound
of AP and NDCG loss for learning models for a variety of vision tasks. We show
that our approach provides significantly better results than simpler
decomposable loss functions, while requiring a comparable training time.
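For reference, here is a small self-contained Python illustration of the AP-based loss that such systems minimize (one minus the average precision of the ranking induced by the scores). It does not reproduce the quicksort-flavored loss-augmented inference described above; the function names and toy data are assumptions made for the example.

```python
# Toy illustration of the AP loss (1 - average precision) on a small ranking.

import numpy as np

def average_precision(scores, labels):
    """AP of the ranking induced by sorting scores in decreasing order; labels are 0/1."""
    order = np.argsort(-scores)
    ranked = labels[order]
    hits = np.cumsum(ranked)
    precision_at_k = hits / np.arange(1, len(ranked) + 1)
    return float(np.sum(precision_at_k * ranked) / max(ranked.sum(), 1))

def ap_loss(scores, labels):
    return 1.0 - average_precision(scores, labels)

scores = np.array([0.9, 0.2, 0.7, 0.4])
labels = np.array([1, 0, 1, 0])
print(ap_loss(scores, labels))   # 0.0: both positives outrank both negatives
```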