Constant Modulus Waveform Estimation and Interference Suppression via Two-stage Fractional Program-based Beamforming
In radar and communication systems, a large class of signals has the constant modulus property, including BPSK, QPSK, LFM, and phase-coded signals. In this paper, we focus on the problem of jointly estimating a constant modulus waveform and suppressing interference from signals received at an antenna array. The Capon method seeks a compromise between interference suppression and output noise power reduction, while the linearly constrained minimum variance (LCMV) beamformer uses prior knowledge of the interference directions (IDs) to place perfect nulls at the IDs and then minimize the output noise power. Instead, we devise a novel power ratio criterion, namely the interference-plus-noise-to-noise ratio (INNR) of the beamformer output, to attain perfect interference nulling and minimal output noise power, as in LCMV, yet without knowledge of the IDs. A two-stage fractional program-based method is developed to jointly suppress the interference and estimate the constant modulus waveform. In the first stage, we formulate an optimization model with a fractional objective function to minimize the INNR. In the second stage, another fraction-constrained optimization problem is established to refine the weight vector within the solution space bounded by the INNR, achieving approximately perfect nulls and minimum output noise power. The solution is further extended to handle the case with steering vector errors. Numerical results demonstrate the excellent performance of our methods.
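As a rough illustration of the INNR criterion, the minimal numpy sketch below compares the INNR of a conventional beamformer with that of a null-steering beamformer. The array geometry, directions, and powers are all invented for illustration, and the interference direction is assumed known here, unlike in the paper's unknown-ID setting.

```python
import numpy as np

def steering(theta_deg, n=8):
    """Steering vector of an n-element half-wavelength ULA."""
    k = np.arange(n)
    return np.exp(1j * np.pi * k * np.sin(np.deg2rad(theta_deg)))

n, sigma2 = 8, 1.0           # array size, noise power (illustrative)
a_s = steering(0.0, n)       # desired signal direction (assumed)
a_i = steering(40.0, n)      # interference direction (assumed known here)
inr = 100.0                  # interference-to-noise ratio

# Interference-plus-noise covariance: R_in = INR * a_i a_i^H + sigma^2 I
R_in = inr * np.outer(a_i, a_i.conj()) + sigma2 * np.eye(n)

def innr(w):
    """Interference-plus-noise-to-noise ratio at the beamformer output."""
    num = np.real(w.conj() @ R_in @ w)
    den = sigma2 * np.real(w.conj() @ w)
    return num / den

# LCMV-style weights with a perfect null at the interferer:
# project a_s onto the complement of a_i, then normalise the signal gain.
P = np.eye(n) - np.outer(a_i, a_i.conj()) / np.real(a_i.conj() @ a_i)
w_null = P @ a_s
w_null /= (a_s.conj() @ w_null)

print(innr(a_s / n))    # conventional beamformer: INNR well above 1
print(innr(w_null))     # nulling beamformer: INNR -> 1 (noise only)
```

For weights that null the interference perfectly, the beamformer output contains noise only and the INNR reaches its lower bound of one, which is the behaviour the paper's two-stage program seeks without knowing the interference direction.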
Global Inference for Sentence Compression: An Integer Linear Programming Approach
Sentence compression holds promise for many applications ranging from summarization to subtitle generation. Our work views sentence compression as an optimization problem and uses integer linear programming (ILP) to infer globally optimal compressions in the presence of linguistically motivated constraints. We show how previous formulations of sentence compression can be recast as ILPs and extend these models with novel global constraints. Experimental results on written and spoken texts demonstrate improvements over state-of-the-art models.
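As a toy illustration of the idea (not the paper's actual model), the sketch below scores each word, constrains the compression length, and forces the syntactic head of any kept word to be kept as well. The sentence, scores and head indices are all invented, and the tiny ILP is solved by exhaustive enumeration of the binary variables rather than by an ILP solver.

```python
from itertools import product

# Toy sentence with per-word relevance scores and head indices
# (a kept word's head must also be kept) -- all values are illustrative.
words  = ["He", "quickly", "read", "the", "long", "report", "yesterday"]
scores = [ 3.0,  0.5,      4.0,   1.0,   0.4,   3.5,      0.6]
head   = [ 2,    2,       -1,     5,     5,     2,        2]  # -1 = root
max_len = 4   # keep at most 4 words

best, best_val = None, float("-inf")
# Each x in {0,1}^n is a candidate point of the ILP; the sentence is
# short enough to enumerate them all.
for x in product([0, 1], repeat=len(words)):
    if sum(x) > max_len:
        continue                      # global length constraint
    if any(x[i] and head[i] >= 0 and not x[head[i]] for i in range(len(x))):
        continue                      # syntactic constraint: keep head with modifier
    val = sum(s for s, keep in zip(scores, x) if keep)
    if val > best_val:
        best, best_val = x, val

print(" ".join(w for w, keep in zip(words, best) if keep))  # He read the report
```

The global length constraint and the head-modifier constraint interact: the highest-scoring words cannot be chosen independently, which is exactly why a global inference framework such as ILP is needed.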
Applied Harmonic Analysis and Sparse Approximation
Efficiently analyzing functions, in particular multivariate functions, is a key problem in applied mathematics. The area of applied harmonic analysis has a significant impact on this problem by providing methodologies both for theoretical questions and for a wide range of applications in technology and science, such as image processing. Approximation theory, in particular the branch of the theory of sparse approximations, is closely intertwined with this area with a lot of recent exciting developments in the intersection of both. Research topics typically also involve related areas such as convex optimization, probability theory, and Banach space geometry. The workshop was the continuation of a first event in 2012 and intended to bring together world leading experts in these areas, to report on recent developments, and to foster new developments and collaborations
ReviewerGPT? An Exploratory Study on Using Large Language Models for Paper Reviewing
Given the rapid ascent of large language models (LLMs), we study the question: (how) can large language models help in reviewing scientific papers or proposals? We first conduct pilot studies in which we find that (i) GPT-4 outperforms other LLMs (Bard, Vicuna, Koala, Alpaca, LLaMa, Dolly, OpenAssistant, StableLM), and (ii) prompting with a specific question (e.g., to identify errors) outperforms prompting to simply write a review. With these insights, we study the use of LLMs (specifically, GPT-4) for three tasks:
1. Identifying errors: We construct 13 short computer science papers, each with a deliberately inserted error, and ask the LLM to check the correctness of these papers. We observe that the LLM finds errors in 7 of them, spanning both mathematical and conceptual errors.
2. Verifying checklists: We task the LLM with verifying 16 closed-ended checklist questions in the respective sections of 15 NeurIPS 2022 papers. We find that across 119 {checklist question, paper} pairs, the LLM achieves 86.6% accuracy.
3. Choosing the "better" paper: We generate 10 pairs of abstracts, deliberately designing each pair so that one abstract is clearly superior to the other. The LLM, however, struggles to discern these relatively straightforward distinctions, committing errors in its evaluations for 6 of the 10 pairs.
Based on these experiments, we think that LLMs have a promising use as reviewing assistants for specific reviewing tasks, but not (yet) for complete evaluations of papers or proposals.
Advanced Techniques for Future Multicarrier Systems
Future multicarrier systems face the tough challenge of supporting high data-rate and high-quality services. The main limitation is the frequency-selective nature of the propagation channel that affects the received signal, thus degrading the system performance.
OFDM can be envisaged as one of the most promising modulation techniques for future communication systems. It exhibits robustness to inter-symbol interference (ISI) even in very dispersive environments, and its main strength is its ability to take advantage of channel diversity by performing dynamic resource allocation. In a multi-user OFDMA scenario, the challenge is to allocate, on the basis of channel knowledge, different portions of the available frequency spectrum among the users in the system.
The literature on resource allocation for OFDMA systems has mainly focused on single-cell systems, where the objective is to assign subcarriers, power and data-rate to each user according to a predetermined criterion. The problem can be formulated with the goal of either maximizing the system sum-rate subject to a constraint on transmitted power or minimizing the overall power consumption under predetermined per-user rate constraints. Only recently has the literature turned to resource allocation in multi-cell networks, where the goal is not only to take advantage of frequency and multi-user diversity, but also to mitigate multiple access interference (MAI), one of the most limiting factors in such networks.
We consider a multi-cell OFDMA system with a frequency reuse distance equal to one. Allowing all cells to transmit over the whole bandwidth unveils large potential gains in spectral efficiency compared with conventional cellular systems. Such a scenario, however, is often deemed unfeasible because of the strong MAI that negatively affects system performance. In this dissertation we present a layered architecture that integrates a packet scheduler with an adaptive resource allocator, explicitly designed to take care of the multiple access interference. Each cell performs its resource management in a distributed way without any central controller. Iterative resource allocation assigns radio channels to the users so as to minimize the interference. Packet scheduling guarantees that all users get a fair share of resources regardless of their position in the cell. This scheduler-allocator architecture integrates both goals and is able to self-adapt to any traffic and user configuration. An adaptive, distributed load control strategy can reduce the cell load so that the iterative procedure always converges to a stable allocation, regardless of the interference. Numerical results show that the proposed architecture guarantees both high spectral efficiency and throughput fairness among flows.
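A heavily simplified sketch of such an iterative, distributed allocation is given below, with invented cell loads and coupling gains and none of the scheduling or load-control machinery of the dissertation; each cell in turn re-picks its least-interfered channels given the other cells' current choices, stopping when the allocation is stable or an iteration cap is reached (the dissertation's load control is what guarantees convergence in general).

```python
import numpy as np

rng = np.random.default_rng(0)
C, K = 3, 6                            # cells, channels (toy sizes)
load = [2, 3, 2]                       # channels each cell must use
gain = rng.uniform(0.1, 1.0, (C, C))   # cross-cell coupling (illustrative)
np.fill_diagonal(gain, 0.0)            # a cell does not interfere with itself

# alloc[c] = set of channels used by cell c; start with the lowest indices
alloc = [set(range(load[c])) for c in range(C)]

for _ in range(20):                    # iterative best-response, one cell at a time
    changed = False
    for c in range(C):
        # interference seen by cell c on each channel, given the others' choices
        interf = [sum(gain[o, c] for o in range(C) if o != c and k in alloc[o])
                  for k in range(K)]
        best = set(np.argsort(interf)[:load[c]])
        if best != alloc[c]:
            alloc[c], changed = best, True
    if not changed:
        break                          # stable allocation reached

print(alloc)
```

Each cell acts on purely local interference measurements, which is the distributed, controller-free behaviour the abstract describes.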
In the second part of this dissertation we deal with filter bank multicarrier (FBMC) communication systems. FBMC modulation is a valid alternative to conventional OFDM signaling, as it presents a set of appealing characteristics, such as robustness to narrowband interferers, more flexibility in allocating groups of subchannels to different users/services, and frequency-domain equalization without any cyclic extension. However, like any other multicarrier modulation, FBMC is strongly affected by residual carrier frequency offsets (CFOs) that have to be accurately estimated.
Previously proposed algorithms recover the frequency either by relying on known pilot symbols multiplexed with the data stream or by exploiting specific properties of the multicarrier signal structure in a blind fashion. In contrast, we present and discuss an algorithm based on the maximum likelihood (ML) principle, which takes advantage of the pilot symbols and, indirectly, of the data symbols through knowledge and exploitation of their specific modulation format. The algorithm requires knowledge of the statistical properties of the channel fading up to second-order moments. It is shown that this approach improves on both the frequency acquisition range and the estimation accuracy of previously published schemes.
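The dissertation's ML estimator is more elaborate, but the basic pilot-aided idea of recovering a CFO from the phase rotation between known repeated symbols can be sketched as follows; all parameters are invented, and this is a plain correlation estimator, not the proposed algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
N, D = 64, 64                 # pilot block length and repetition spacing (samples)
cfo = 0.002                   # true CFO, in cycles per sample (assumed)

pilot = rng.choice([1 + 0j, -1 + 0j], N)      # known BPSK pilot block
tx = np.concatenate([pilot, pilot])           # transmit it twice
n = np.arange(tx.size)
rx = tx * np.exp(2j * np.pi * cfo * n)        # channel applies the offset
rx += 0.05 * (rng.standard_normal(tx.size) + 1j * rng.standard_normal(tx.size))

# Correlate the two copies: the phase advance over D samples reveals the CFO.
corr = np.sum(rx[D:2 * D] * np.conj(rx[:D]))
cfo_hat = np.angle(corr) / (2 * np.pi * D)

print(cfo, cfo_hat)           # the estimate lands close to the true offset
```

The acquisition range of this simple correlator is limited to |cfo| < 1/(2D) by phase wrapping, which is one of the limitations the ML approach in the dissertation is designed to overcome.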
Global Inference for Sentence Compression: An Integer Linear Programming Approach
Institute for Communicating and Collaborative Systems
In this thesis we develop models for sentence compression. This text rewriting task has recently attracted a lot of attention due to its relevance for applications (e.g., summarisation) and its simple formulation by means of word deletion. Previous models for sentence compression have been inherently local and thus fail to capture the long-range dependencies and complex interactions involved in text rewriting. We present a solution by framing the task as an optimisation problem with local and global constraints and recast existing compression models in this framework. Using the constraints, we instil syntactic, semantic and discourse knowledge that the models otherwise fail to capture. We show that the addition of constraints allows relatively simple local models to reach state-of-the-art performance for sentence compression.
The thesis provides a detailed study of sentence compression and its models. The differences between automatically and manually created compression corpora are assessed, along with how compression varies across written and spoken text. We also discuss various techniques for automatically and manually evaluating compression output against a gold standard. Models are reviewed based on their assumptions, training requirements, and scalability.
We introduce a general method for extending previous approaches to allow for more global models. This is achieved through the optimisation framework of Integer Linear Programming (ILP). We reformulate three compression models (an unsupervised model, a semi-supervised model and a fully supervised model) as ILP problems and augment them with constraints. These constraints are intuitive for the compression task and are both syntactically and semantically motivated. We demonstrate how they improve compression quality and reduce the requirements on training material.
Finally, we delve into document compression, where the task is to compress every sentence of a document and use the resulting summary as a replacement for the original document. For document-based compression we investigate discourse information and its application to the compression task. Two discourse theories, Centering and lexical chains, are used to automatically annotate documents. These annotations are then used in our compression framework to impose additional constraints on the resulting document. The goal is to preserve the discourse structure of the original document and most of its content. We show how a discourse-informed compression model can outperform a discourse-agnostic state-of-the-art model under a question answering evaluation paradigm.
Algorithms and techniques for polynomial matrix decompositions
The concept of polynomial matrices is introduced and the potential application of polynomial matrix decompositions is discussed within the general context of multi-channel digital signal processing. A recently developed technique, known as the second-order sequential best rotation algorithm (SBR2), for performing the eigenvalue decomposition of a para-Hermitian polynomial matrix (PEVD) is presented. The potential benefit of using the SBR2 algorithm to impose strong decorrelation on the signals received by a broadband sensor array is demonstrated by means of a suitable numerical simulation. This demonstrates how the polynomial matrices produced as a result of the PEVD can be of unnecessarily high order, which is undesirable for many practical applications and slows down the iterative computational procedure. An effective truncation technique for controlling the growth in order of these polynomial matrices is proposed. Depending on the choice of truncation parameters, it provides an excellent compromise between reduced-order polynomial matrix factors and accuracy of the resulting decomposition, as demonstrated by a set of numerical simulations applying the modified SBR2 algorithm with a variety of truncation parameters to a representative set of test matrices.
Three new polynomial matrix decompositions are then introduced: one for implementing a polynomial matrix QR decomposition (PQRD) and two for implementing a polynomial matrix singular value decomposition (PSVD). Several variants of the PQRD algorithm (including polynomial order reduction) are proposed and compared by numerical simulation using an appropriate set of test matrices, and the most effective variant with respect to computational speed, order of the polynomial matrix factors and accuracy of the resulting decomposition is identified. The PSVD can be computed using either the PEVD technique, based on the SBR2 algorithm, or the new algorithm proposed for implementing the PQRD. These two approaches are also compared by means of computer simulations, which demonstrate that the method based on the PQRD is numerically superior. The potential application of the preferred PQRD and PSVD algorithms to multiple-input multiple-output (MIMO) communications, for the purpose of counteracting both co-channel interference and inter-symbol interference (multi-channel equalisation), is demonstrated in terms of reduced bit error rate by means of representative computer simulations.
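A minimal sketch of one plausible truncation scheme of this kind (not the thesis's exact method) represents a polynomial matrix as an array of coefficient matrices indexed by lag and peels off outer lags while the discarded energy stays below a chosen fraction of the total:

```python
import numpy as np

def truncate(A, mu=1e-3):
    """Trim outer lags of a polynomial matrix A[lag, :, :] as long as the
    total discarded energy stays below a fraction mu of the overall energy
    (illustrative scheme)."""
    energy = np.sum(np.abs(A) ** 2, axis=(1, 2))   # energy per lag
    total = energy.sum()
    lo, hi = 0, len(energy)
    dropped = 0.0
    # Greedily peel the weaker of the two outermost lags at each step.
    while hi - lo > 1:
        side = lo if energy[lo] <= energy[hi - 1] else hi - 1
        if dropped + energy[side] > mu * total:
            break
        dropped += energy[side]
        if side == lo:
            lo += 1
        else:
            hi -= 1
    return A[lo:hi]

# A 2x2 polynomial matrix of order 8 with energy concentrated in middle lags
rng = np.random.default_rng(2)
lags = np.arange(9) - 4
A = rng.standard_normal((9, 2, 2)) * (0.3 ** np.abs(lags))[:, None, None]

A_t = truncate(A, mu=1e-3)
print(A.shape, A_t.shape)   # polynomial order before and after trimming
```

The parameter mu plays the role of the truncation parameter discussed above: larger values yield lower-order factors at the cost of decomposition accuracy.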