Upper bounds on the growth rates of hard squares and related models via corner transfer matrices
We study the growth rate of the hard squares lattice gas, equivalent to the
number of independent sets on the square lattice, and two related models -
non-attacking kings and read-write isolated memory. We use an assortment of
techniques from combinatorics, statistical mechanics and linear algebra to
prove upper bounds on these growth rates. We start from Calkin and Wilf's
transfer matrix eigenvalue bound, then bound that with the Collatz-Wielandt
formula from linear algebra. To obtain an approximate eigenvector, we use an
ansatz from Baxter's corner transfer matrix formalism, optimised with Nishino
and Okunishi's corner transfer matrix renormalisation group method. This
results in an upper bound algorithm which no longer requires exponential memory
and so is much faster to calculate than a direct evaluation of the Calkin-Wilf
bound. Furthermore, it is extremely parallelisable and so allows us to make
dramatic improvements to the previous best known upper bounds. In all cases we
reduce the gap between upper and lower bounds by 4-6 orders of magnitude.
Comment: Also submitted to the FPSAC 2015 conference
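The Collatz-Wielandt step is easy to reproduce at small scale. The following sketch (our illustration, not the paper's CTMRG-optimised algorithm; all names are ours) builds the strip transfer matrix for hard squares and converts any positive test vector into a rigorous upper bound on its leading eigenvalue:

```python
import itertools

def row_states(n):
    # admissible rows for hard squares: length-n binary strings with no adjacent 1s
    return [s for s in itertools.product((0, 1), repeat=n)
            if not any(a and b for a, b in zip(s, s[1:]))]

def transfer_matrix(n):
    # T[r][c] = 1 if row r may sit directly above row c (no vertically adjacent 1s)
    S = row_states(n)
    return [[0 if any(a and b for a, b in zip(r, c)) else 1 for c in S] for r in S]

def collatz_wielandt_bound(A, x):
    # for a nonnegative matrix A and strictly positive x:
    # lambda_max(A) <= max_i (A x)_i / x_i
    Ax = [sum(aij * xj for aij, xj in zip(row, x)) for row in A]
    return max(num / xi for num, xi in zip(Ax, x))

def power_vector(A, iters=100):
    # crude approximate Perron eigenvector, used as the test vector
    x = [1.0] * len(A)
    for _ in range(iters):
        x = [sum(aij * xj for aij, xj in zip(row, x)) for row in A]
        top = max(x)
        x = [xj / top for xj in x]
    return x

A = transfer_matrix(8)
crude = collatz_wielandt_bound(A, [1.0] * len(A))   # equals the max row sum
tight = collatz_wielandt_bound(A, power_vector(A))  # close to lambda_max itself
kappa_bound = tight ** (1 / 8)  # per-site bound for the width-8 strip
```

Any positive vector yields a valid bound, and the closer it is to the Perron eigenvector, the tighter the bound; the paper's contribution is producing good test vectors from Baxter's corner transfer matrix ansatz without ever storing the exponentially large transfer matrix.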
Towards a better approximation for sparsest cut?
We give a new (1+Δ)-approximation for the sparsest cut problem on graphs
where small sets expand significantly more than the sparsest cut (sets of size
n/r expand by a factor √(log n log r) bigger, for some small r; this
condition holds for many natural graph families). We give two different
algorithms. One involves Guruswami-Sinop rounding on the level-r Lasserre
relaxation. The other is combinatorial and involves a new notion called
Small Set Expander Flows (inspired by the expander flows of ARV) which
we show exists in the input graph. Both algorithms run in time 2^O(r) poly(n).
We also show similar approximation algorithms in graphs with
genus g with an analogous local expansion condition. This is the first
algorithm we know of that achieves (1+Δ)-approximation on such a general
family of graphs.
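As a definitional illustration only (the point of the paper is precisely to avoid this exponential search), the sparsest cut of a tiny graph can be found by brute force:

```python
import itertools

def cut_sparsity(n, edges, S):
    # phi(S) = |E(S, V\S)| / min(|S|, |V\S|)
    S = set(S)
    cross = sum(1 for u, v in edges if (u in S) != (v in S))
    return cross / min(len(S), n - len(S))

def sparsest_cut(n, edges):
    # exhaustive search over all non-trivial subsets; tiny graphs only
    return min(
        (cut_sparsity(n, edges, S), S)
        for k in range(1, n // 2 + 1)
        for S in itertools.combinations(range(n), k)
    )

cycle6 = [(i, (i + 1) % 6) for i in range(6)]
phi, S = sparsest_cut(6, cycle6)
```

On the 6-cycle the optimum is phi = 2/3, attained by cutting the cycle into two paths of three vertices each.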
Advanced Algebraic Concepts for Efficient Multi-Channel Signal Processing
Modern society is undergoing a fundamental change in the way we interact with technology.
More and more devices are becoming "smart" by gaining advanced computation capabilities
and communication interfaces, from household appliances through transportation systems to large-scale
networks like the power grid. Recording, processing, and exchanging digital information
is thus becoming increasingly important. As a growing share of devices is nowadays mobile
and hence battery-powered, a particular interest in efficient digital signal processing techniques
emerges.
This thesis contributes to this goal by demonstrating methods for finding efficient algebraic
solutions to various applications of multi-channel digital signal processing. These may not
always result in the best possible system performance. However, they often come close while
being significantly simpler to describe and to implement. The simpler description facilitates a
thorough analysis of their performance which is crucial to design robust and reliable systems.
The fact that they rely only on standard algebraic methods allows their rapid implementation
and test under real-world conditions.
We demonstrate this concept in three different application areas. First, we present a semi-algebraic
framework to compute the Canonical Polyadic (CP) decompositions of multidimensional
signals, a very fundamental tool in multilinear algebra with applications ranging from
chemistry through communications to image compression. Compared to state-of-the-art iterative
solutions, our framework offers a flexible control of the complexity-accuracy trade-off and
is less sensitive to badly conditioned data. The second application area is multidimensional
subspace-based high-resolution parameter estimation with applications in RADAR, wave propagation
modeling, or biomedical imaging. We demonstrate that multidimensional signals can
be represented by tensors, providing a convenient description and making it possible to exploit
the multidimensional structure better than matrices alone allow. Based on this idea,
we introduce the tensor-based subspace estimate which can be applied to enhance existing
matrix-based parameter estimation schemes significantly. We demonstrate the enhancements
by choosing the family of ESPRIT-type algorithms as an example and introducing enhanced
versions that exploit the multidimensional structure (Tensor-ESPRIT), non-circular source
amplitudes (NC ESPRIT), and both jointly (NC Tensor-ESPRIT). To objectively judge the
resulting estimation accuracy, we derive a framework for the analytical performance assessment
of arbitrary ESPRIT-type algorithms by virtue of an asymptotic first-order perturbation
expansion. Our results are more general than existing analytical results since we do not need
any assumptions about the distribution of the desired signal and the noise and we do not
require the number of samples to be large. At the end, we obtain simplified expressions for the
mean square estimation error that provide insights into the efficiency of the methods under various
conditions. The third application area is bidirectional relay-assisted communications. Due to
its particularly low complexity and its efficient use of the radio resources we choose two-way
relaying with a MIMO amplify-and-forward relay. We demonstrate that the required channel
knowledge can be obtained by a simple algebraic tensor-based channel estimation scheme. We
also discuss the design of the relay amplification matrix in such a setting. Existing approaches
are either based on complicated numerical optimization procedures or on ad-hoc solutions
that do not perform well in terms of the bit error rate or the sum-rate. Therefore, we propose
algebraic solutions that are inspired by these performance metrics and therefore perform well
while being easy to compute. For the MIMO case, we introduce the algebraic norm maximizing
(ANOMAX) scheme, which achieves a very low bit error rate, and its extension Rank-Restored
ANOMAX (RR-ANOMAX) that achieves a sum-rate close to an upper bound. Moreover, for
the special case of single antenna terminals we derive the semi-algebraic RAGES scheme which
finds the sum-rate optimal relay amplification matrix based on generalized eigenvectors. Numerical
simulations evaluate the resulting system performance in terms of bit error rate and
system sum rate, which demonstrates the effectiveness of the proposed algebraic solutions.
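For context on the iterative baseline: the standard way to fit a CP decomposition is alternating least squares (ALS), the kind of state-of-the-art iterative solution the semi-algebraic framework is compared against. A minimal pure-Python sketch of ALS for the rank-1, three-way case (our toy illustration, not code from the thesis):

```python
def rank1_als(T, iters=25):
    # Fit T[i][j][k] ~ a[i] * b[j] * c[k] by alternating least squares:
    # each factor update is a closed-form least-squares solve with the
    # other two factors held fixed.
    I, J, K = len(T), len(T[0]), len(T[0][0])
    a, b, c = [1.0] * I, [1.0] * J, [1.0] * K
    for _ in range(iters):
        nb = sum(x * x for x in b); nc = sum(x * x for x in c)
        a = [sum(T[i][j][k] * b[j] * c[k] for j in range(J) for k in range(K))
             / (nb * nc) for i in range(I)]
        na = sum(x * x for x in a)
        b = [sum(T[i][j][k] * a[i] * c[k] for i in range(I) for k in range(K))
             / (na * nc) for j in range(J)]
        nb = sum(x * x for x in b)
        c = [sum(T[i][j][k] * a[i] * b[j] for i in range(I) for j in range(J))
             / (na * nb) for k in range(K)]
    return a, b, c

# exact rank-1 tensor built from known factors (scaling of the recovered
# factors is arbitrary, but their outer product reconstructs T)
a0, b0, c0 = [1.0, 2.0], [1.0, 3.0], [2.0, 1.0]
T = [[[ai * bj * ck for ck in c0] for bj in b0] for ai in a0]
a, b, c = rank1_als(T)
```

A full CP-ALS updates entire factor matrices the same way; its sensitivity to ill-conditioned data is exactly what the semi-algebraic framework above is designed to mitigate.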
Quantitative analysis of algorithms for compressed signal recovery
Compressed Sensing (CS) is an emerging paradigm in which signals are recovered from undersampled
nonadaptive linear measurements taken at a rate proportional to the signal's true
information content as opposed to its ambient dimension. The resulting problem consists in finding a sparse solution to an underdetermined system of linear equations. It has now been
established, both theoretically and empirically, that certain optimization algorithms are able
to solve such problems. Iterative Hard Thresholding (IHT) (Blumensath and Davies, 2007),
which is the focus of this thesis, is an established CS recovery algorithm which is known to
be effective in practice, both in terms of recovery performance and computational efficiency.
However, theoretical analysis of IHT to date suffers from two drawbacks: state-of-the-art worst-case
recovery conditions have not yet been quantified in terms of the sparsity/undersampling
trade-off, and also there is a need for average-case analysis in order to understand the behaviour
of the algorithm in practice.
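IHT itself is only a few lines. The sketch below (a generic textbook implementation on an easy synthetic instance, not code from the thesis) iterates x ← H_s(x + ÎŒ Aᔀ(y − Ax)), where H_s keeps the s largest-magnitude entries and zeroes the rest:

```python
import math
import random

def iht(A, y, s, mu, iters=300):
    # Iterative Hard Thresholding: x <- H_s(x + mu * A^T (y - A x))
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [y[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        g = [x[j] + mu * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        keep = set(sorted(range(n), key=lambda j: -abs(g[j]))[:s])
        x = [g[j] if j in keep else 0.0 for j in range(n)]
    return x

# synthetic compressed-sensing instance: Gaussian matrix, 2-sparse signal
random.seed(1)
m, n, s = 50, 70, 2
A = [[random.gauss(0.0, 1.0) / math.sqrt(m) for _ in range(n)] for _ in range(m)]
x_true = [0.0] * n
x_true[5], x_true[33] = 3.0, -2.0
y = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(m)]

# conservative stepsize mu = 1 / lambda_max(A^T A), estimated by power
# iteration; this choice makes the residual decrease monotonically
v = [1.0] * n
for _ in range(100):
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
    v = [sum(A[i][j] * Av[i] for i in range(m)) for j in range(n)]
    norm = math.sqrt(sum(t * t for t in v))
    v = [t / norm for t in v]
Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
lam = sum(t * t for t in Av)
x_hat = iht(A, y, s, 1.0 / lam)
```

On this easy instance the iterate converges to the true signal, which is then a fixed point of the map above: applying one more step leaves it unchanged. Such fixed points are exactly the objects the recovery analysis in this thesis studies; at harder sparsity/undersampling ratios the iteration can instead stall at a spurious fixed point.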
In this thesis, we present a new recovery analysis of IHT, which considers the fixed points of
the algorithm. In the context of arbitrary matrices, we derive a condition guaranteeing convergence
of IHT to a fixed point, and a condition guaranteeing that all fixed points are 'close' to
the underlying signal. If both conditions are satisfied, signal recovery is therefore guaranteed.
Next, we analyse these conditions in the case of Gaussian measurement matrices, exploiting
the realistic average-case assumption that the underlying signal and measurement matrix are
independent. We obtain asymptotic phase transitions in a proportional-dimensional framework,
quantifying the sparsity/undersampling trade-off for which recovery is guaranteed. By generalizing
the notion of fixed points, we extend our analysis to the variable-stepsize Normalised IHT
(NIHT) (Blumensath and Davies, 2010). For both stepsize schemes, comparison with previous
results within this framework shows a substantial quantitative improvement.
We also extend our analysis to a related algorithm which exploits the assumption that the
underlying signal exhibits tree-structured sparsity in a wavelet basis (Baraniuk et al., 2010).
We obtain recovery conditions for Gaussian matrices in a simplified proportional-dimensional
asymptotic, deriving bounds on the oversampling rate relative to the sparsity for which recovery
is guaranteed. Our results, which are the first in the phase transition framework for tree-based
CS, show a further significant improvement over results for the standard sparsity model. We
also propose a dynamic programming algorithm which is guaranteed to compute an exact tree
projection in low-order polynomial time.
Integrality and cutting planes in semidefinite programming approaches for combinatorial optimization
Many real-life decision problems are discrete in nature. To solve such problems as mathematical optimization problems, integrality constraints are commonly incorporated in the model to reflect the choice of finitely many alternatives. At the same time, it is known that semidefinite programming is very suitable for obtaining strong relaxations of combinatorial optimization problems. In this dissertation, we study the interplay between semidefinite programming and integrality, where a special focus is put on the use of cutting-plane methods. Although the notions of integrality and cutting planes are well-studied in linear programming, integer semidefinite programs (ISDPs) have been considered only recently. We show that many combinatorial optimization problems can be modeled as ISDPs. Several theoretical concepts, such as the ChvĂĄtal-Gomory closure, total dual integrality and integer Lagrangian duality, are studied for the case of integer semidefinite programming. On the practical side, we introduce an improved branch-and-cut approach for ISDPs and a cutting-plane augmented Lagrangian method for solving semidefinite programs with a large number of cutting planes. Throughout the thesis, we apply our results to a wide range of combinatorial optimization problems, among which the quadratic cycle cover problem, the quadratic traveling salesman problem and the graph partition problem. Our approaches lead to novel, strong and efficient solution strategies for these problems, with the potential to be extended to other problem classes.
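For readers unfamiliar with the linear-programming notion this dissertation generalizes: a ChvĂĄtal-Gomory cut scales a valid inequality and rounds the right-hand side down, which preserves every integer point. A minimal sketch (our illustration, not the dissertation's ISDP machinery):

```python
from fractions import Fraction
from math import floor

def cg_cut(a, b, u):
    # From a^T x <= b valid for an integer set, and a multiplier u >= 0
    # making u*a integral, derive the Chvatal-Gomory cut (u*a)^T x <= floor(u*b):
    # every integer x satisfying the original inequality satisfies the cut.
    ua = [u * ai for ai in a]
    assert u >= 0 and all(c.denominator == 1 for c in ua)
    return [int(c) for c in ua], floor(u * b)

# 2*x1 + 2*x2 <= 3 with x integer: multiplier 1/2 gives x1 + x2 <= 1,
# cutting off fractional points such as (3/4, 3/4)
lhs, rhs = cg_cut([Fraction(2), Fraction(2)], Fraction(3), Fraction(1, 2))
```

The ChvĂĄtal-Gomory closure for ISDPs studied in the dissertation plays the analogous role for linear matrix inequalities over integer variables.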