Inner and Outer Bounds for the Gaussian Cognitive Interference Channel and New Capacity Results
The capacity of the Gaussian cognitive interference channel, a variation of
the classical two-user interference channel where one of the transmitters
(referred to as cognitive) has knowledge of both messages, is known in several
parameter regimes but remains unknown in general. In this paper we provide a
comparative overview of this channel model as we proceed through our
contributions: we present a new outer bound based on the idea of a broadcast
channel with degraded message sets, and another series of outer bounds obtained
by transforming the cognitive channel into channels with known capacity. We
specialize the largest known inner bound derived for the discrete memoryless
channel to the Gaussian noise channel and present several simplified schemes
evaluated for Gaussian inputs in closed form, which we use to prove a number of
results. These include a new set of capacity results: a) for the "primary
decodes cognitive" regime, a subset of the "strong interference" regime that is
not included in the "very strong interference" regime (for which capacity was
already known), and b) for the "S-channel", in which the primary transmitter does not
interfere with the cognitive receiver. Next, for a general Gaussian cognitive
interference channel, we determine the capacity to within one bit/s/Hz and to
within a factor two regardless of channel parameters, thus establishing rate
performance guarantees at high and low SNR, respectively. We also show how
different simplified transmission schemes achieve a constant gap between inner
and outer bound for specific channels. Finally, we numerically evaluate and
compare the various simplified achievable rate regions and outer bounds in
parameter regimes where capacity is unknown, leading to further insight on the
capacity region of the Gaussian cognitive interference channel.
Comment: submitted to the IEEE Transactions on Information Theory
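For reference, a commonly used standard form of the Gaussian cognitive interference channel (a generic sketch of the model class discussed above; the paper's exact parameterization may differ) is

% Transmitter 2 (the cognitive one) knows both messages W_1 and W_2;
% a and b are the real interference gains.
\begin{align*}
Y_1 &= X_1 + a\,X_2 + Z_1, \qquad Z_1 \sim \mathcal{N}(0,1),\\
Y_2 &= b\,X_1 + X_2 + Z_2, \qquad Z_2 \sim \mathcal{N}(0,1),
\end{align*}
% subject to per-user power constraints E[X_i^2] <= P_i.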
Learning a Complete Image Indexing Pipeline
To work at scale, a complete image indexing system comprises two components:
An inverted file index to restrict the actual search to only a subset that
should contain most of the items relevant to the query; An approximate distance
computation mechanism to rapidly scan these lists. While supervised deep
learning has recently enabled improvements to the latter, the former continues
to be based on unsupervised clustering in the literature. In this work, we
propose a first system that learns both components within a unifying neural
framework of structured binary encoding.
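As a rough illustration of the two-stage design described above, here is a hypothetical baseline sketch in Python: an unsupervised k-means inverted file plus random-projection binary codes scanned by Hamming distance. The specific choices (sklearn's KMeans, 256 cells, 32-bit codes) are illustrative assumptions only; the paper's point is precisely to replace such hand-designed stages with jointly learned ones.

# Hypothetical sketch of a classical two-stage image index:
# (1) an inverted file (IVF) built with unsupervised k-means,
# (2) binary codes compared with Hamming distance inside probed lists.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
db = rng.normal(size=(10_000, 64)).astype(np.float32)    # database vectors
query = rng.normal(size=(64,)).astype(np.float32)

# Stage 1: coarse quantizer -> inverted lists.
kmeans = KMeans(n_clusters=256, n_init=4, random_state=0).fit(db)
lists = {c: np.where(kmeans.labels_ == c)[0] for c in range(256)}

# Binary codes: sign of a random projection (an LSH-style placeholder
# standing in for a learned encoder).
proj = rng.normal(size=(64, 32)).astype(np.float32)
codes = db @ proj > 0                                    # (N, 32) bools
q_code = query @ proj > 0

# Stage 2: probe the closest coarse cells, scan their lists by Hamming
# distance, and return the best candidates.
n_probe = 8
cell_dists = np.linalg.norm(kmeans.cluster_centers_ - query, axis=1)
candidates = np.concatenate([lists[c] for c in np.argsort(cell_dists)[:n_probe]])
hamming = (codes[candidates] != q_code).sum(axis=1)
print("approximate nearest neighbours:", candidates[np.argsort(hamming)[:10]])

At query time only the few lists whose centroids are closest to the query are scanned, which is what keeps search cost sublinear in the database size.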
From surface dependencies towards deeper semantic representations
In the past, a divide could be seen between "deep" parsers on the one hand, which construct a semantic representation out of their input but usually have significant coverage problems, and more robust parsers on the other hand, which are usually based on a (statistical) model derived from a treebank and have larger coverage, but leave the problem of semantic interpretation to the user. More recently, approaches have emerged that combine the robustness of data-driven (statistical) models with more detailed linguistic interpretation, such that the output can be used for deeper semantic analysis. Cahill et al. (2002) use a PCFG-based parsing model in combination with a set of principles and heuristics to derive functional (f-)structures of Lexical-Functional Grammar (LFG). They show that the derived functional structures have a better quality than those generated by a parser based on a state-of-the-art hand-crafted LFG grammar. Advocates of Dependency Grammar usually point out that dependencies already are a semantically meaningful representation (cf. Menzel, 2003). However, parsers based on dependency grammar normally create underspecified representations with respect to certain phenomena such as coordination, apposition and control structures. In these areas they are too "shallow" to be directly used for semantic interpretation.

In this paper, we adopt an approach similar to that of Cahill et al. (2002), using a dependency-based analysis to derive functional structure, and demonstrate the feasibility of this approach using German data. A major focus of our discussion is the treatment of coordination and other potentially underspecified structures of the dependency data input.

F-structure is one of the two core levels of syntactic representation in LFG (Bresnan, 2001). Independently of surface order, it encodes abstract syntactic functions that constitute predicate-argument structure and other dependency relations such as subject, predicate and adjunct, but also further semantic information such as the semantic type of an adjunct (e.g. directional). Normally, f-structure is captured as a recursive attribute-value matrix, which is isomorphic to a directed graph representation. Figure 5 depicts an example target f-structure.

As mentioned earlier, these deeper-level dependency relations can be used to construct logical forms, as in the approaches of van Genabith and Crouch (1996), who construct underspecified discourse representations (UDRSs), and Spreyer and Frank (2005), who have robust minimal recursion semantics (RMRS) as their target representation. We therefore think that f-structures are a suitable target representation for automatic syntactic analysis in a larger pipeline of mapping text to interpretation.

In this paper, we report on the conversion from dependency structures to f-structure. Firstly, we evaluate the f-structure conversion in isolation, starting from hand-corrected dependencies based on the TüBa-D/Z treebank and Versley (2005)'s conversion. Secondly, we start from tokenized text to evaluate the combined process of automatic parsing (using Foth and Menzel (2006)'s parser) and f-structure conversion. As a test set, we randomly selected 100 sentences from TüBa-D/Z, which we annotated using a scheme very close to that of the TiGer Dependency Bank (Forst et al., 2004).

In the next section, we sketch dependency analysis, the underlying theory of our input representations, and introduce four different representations of coordination. We also describe Weighted Constraint Dependency Grammar (WCDG), the dependency parsing formalism that we use in our experiments. Section 3 characterises the conversion of dependencies to f-structures. Our evaluation is presented in section 4, and finally, section 5 summarises our results and gives an overview of problems remaining to be solved.
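To make the target representation concrete, here is a hypothetical toy example (not taken from the paper) of an f-structure encoded as a recursive attribute-value matrix, using nested Python dicts; the attribute names follow common LFG conventions (PRED, SUBJ, OBJ), though the paper's inventory may differ.

# Hypothetical toy f-structure for "Maria sieht den Hund" ("Maria sees
# the dog"), written as a recursive attribute-value matrix (nested dicts),
# which is isomorphic to a directed graph over grammatical functions.
f_structure = {
    "PRED": "sehen<SUBJ, OBJ>",
    "TENSE": "present",
    "SUBJ": {"PRED": "Maria", "NUM": "sg", "CASE": "nom"},
    "OBJ":  {"PRED": "Hund", "NUM": "sg", "CASE": "acc", "SPEC": "def"},
}

def predicates(fs):
    """Recursively collect all PRED values (a walk over the AVM graph)."""
    preds = [fs["PRED"]] if "PRED" in fs else []
    for value in fs.values():
        if isinstance(value, dict):
            preds.extend(predicates(value))
    return preds

print(predicates(f_structure))
# ['sehen<SUBJ, OBJ>', 'Maria', 'Hund']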
Collective treatment of High Energy Thresholds in SUSY - GUTs
Supersymmetric GUTs are the most natural extension of the Standard Model,
unifying the electroweak and strong forces. Despite their indubitable virtues,
among them gauge coupling unification and the quantization of electric
charge, one of their shortcomings is the large number of parameters used to
describe the high energy thresholds (HET), which are hard to handle. We present
a new method according to which the effects of the HET, in any GUT model, can
be described by fewer parameters that are randomly produced from the original
set of model parameters. In this way, regions favoured by the
experimental data are easier to locate, avoiding a detailed and time-consuming
exploration of the parameter space, which is multidimensional even in the most
economic unifying schemes. To check the efficiency of this method, we directly
apply it to a SUSY SO(10) GUT model in which the doublet-triplet splitting is
realized through the Dimopoulos-Wilczek mechanism. We show that requiring
gauge coupling unification, in conjunction with precision data, locates regions
of the parameter space in which values of the strong coupling \alpha_s are
within the experimental limits, along with suppressed nucleon decay, mediated
by higgsino-driven dimension-five operators, yielding lifetimes that are
comfortably above the current experimental bounds. These regions open up for
values of the SUSY breaking parameters m_0, M_{1/2} < 1 TeV, which are therefore
accessible to the LHC.
Comment: 21 pages, 8 figures, UA-NPPS/BSM-10/02 (added
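For context, high energy thresholds enter gauge coupling unification as corrections to the running couplings. Schematically, at one loop (a generic textbook relation, not the paper's detailed formulas, with \lambda_i standing for the HET contribution that the proposed method compresses into a few randomly generated parameters):

% One-loop running of the three gauge couplings from M_GUT down to M_Z,
% with b_i the beta-function coefficients and \lambda_i the HET correction.
\alpha_i^{-1}(M_Z) = \alpha_{\mathrm{GUT}}^{-1}
  + \frac{b_i}{2\pi} \ln\frac{M_{\mathrm{GUT}}}{M_Z}
  + \lambda_i^{\mathrm{HET}}, \qquad i = 1, 2, 3.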