Structural solutions to maximum independent set and related problems
In this thesis, we study some fundamental problems in algorithmic graph theory. Most
natural problems in this area are hard from a computational point of view. However,
many applications demand that we solve such problems even if they are intractable.
There are a number of ways in which we can try to do this:
1) We may use an approximation algorithm if we do not necessarily require the best
possible solution to a problem.
2) Heuristics can be applied and work well enough to be useful for many applications.
3) We can construct randomised algorithms for which the probability of failure is very
small.
4) We may parameterize the problem in some way which limits its complexity.
In other cases, we may also have some information about the structure of the
instances of the problem we are trying to solve. If we are lucky, we may find that we
can exploit this extra structure to find efficient ways to solve our problem. The question
that arises is: how far must we restrict the structure of our graph to be able to solve
our problem efficiently?
In this thesis we study a number of problems, such as Maximum Independent
Set, Maximum Induced Matching, Stable-II, Efficient Edge Domination,
Vertex Colouring and Dynamic Edge-Choosability. We try to solve these problems
on various hereditary classes of graphs and analyse the complexity of the resulting
problem, both from a classical and a parameterized point of view.
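To make the central problem concrete: an independent set is a set of vertices no two of which are adjacent, and Maximum Independent Set asks for a largest one. A minimal brute-force sketch follows; it is exponential in the number of vertices, which is exactly why structural restrictions on the input graph matter. The function name and graph encoding are illustrative, not from the thesis.

```python
from itertools import combinations

def max_independent_set(vertices, edges):
    """Return a largest independent set by exhaustive search.

    Runs in exponential time, so it is only usable on tiny graphs;
    restricted graph classes are what make the problem tractable.
    """
    edge_set = {frozenset(e) for e in edges}
    for size in range(len(vertices), 0, -1):
        for subset in combinations(vertices, size):
            # A subset is independent if no pair of its vertices is an edge.
            if all(frozenset(pair) not in edge_set
                   for pair in combinations(subset, 2)):
                return set(subset)
    return set()

# On the path 1-2-3-4, a largest independent set has two vertices.
print(len(max_independent_set([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)])))  # 2
```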
Mixing graph colourings
This thesis investigates some problems related to graph colouring, or, more precisely, graph re-colouring. Informally, the basic question addressed can be phrased as follows. Suppose one is given a graph G whose vertices can be properly k-coloured, for some k ≥ 2. Is it possible to transform any k-colouring of G into any other by recolouring vertices of G one at a time, making sure a proper k-colouring of G is always maintained? If the answer is in the affirmative, G is said to be k-mixing. The related problem of deciding whether, given two k-colourings of G, it is possible to transform one into the other by recolouring vertices one at a time, always maintaining a proper k-colouring of G, is also considered.
These questions can be considered as having a bearing on certain mathematical and ‘real-world’ problems. In particular, being able to recolour any colouring of a given graph to any other colouring is a necessary pre-requisite for the method of sampling colourings known as Glauber dynamics. The results presented in this thesis may also find application in the context of frequency reassignment: given that the problem of assigning radio frequencies in a wireless communications network is often modelled as a graph colouring problem, the task of re-assigning frequencies in such a network can be thought of as a graph recolouring problem.
Throughout the thesis, the emphasis is on the algorithmic aspects and the computational complexity of the questions described above. In other words, how easily, in terms of computational resources used, can they be answered? Strong results are obtained for the k = 3 case of the first question, where a characterisation theorem for 3-mixing graphs is given. For the second question, a dichotomy theorem for the complexity of the problem is proved: the problem is solvable in polynomial time for k ≤ 3 and PSPACE-complete for k ≥ 4. In addition, the possible length of a shortest sequence of recolourings between two colourings is investigated, and an interesting connection between the tractability of the problem and its underlying structure is established. Some variants of the above problems are also explored.
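The second question above is reachability in the "colouring graph" whose vertices are the proper k-colourings and whose edges join colourings differing at a single vertex. A breadth-first search sketch for tiny instances follows; it is exponential in the number of vertices, consistent with the PSPACE-completeness result for k ≥ 4. Names and encoding are illustrative.

```python
from collections import deque

def recolouring_path_exists(n, edges, k, start, target):
    """Breadth-first search over proper k-colourings: can `start` be
    transformed into `target` by recolouring one vertex at a time,
    always keeping the colouring proper?  Exponential in n, so this
    is a sketch for tiny instances only.
    """
    def proper(c):
        return all(c[u] != c[v] for u, v in edges)

    start, target = tuple(start), tuple(target)
    assert proper(start) and proper(target)
    seen, queue = {start}, deque([start])
    while queue:
        c = queue.popleft()
        if c == target:
            return True
        for v in range(n):
            for colour in range(k):
                if colour == c[v]:
                    continue
                nxt = c[:v] + (colour,) + c[v + 1:]
                if proper(nxt) and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False

# Every proper 3-colouring of a triangle is "frozen": no single vertex
# can change colour, so the two colourings below are not connected.
print(recolouring_path_exists(3, [(0, 1), (1, 2), (0, 2)], 3,
                              [0, 1, 2], [1, 0, 2]))  # False
```

With a fourth colour available the same pair of colourings becomes reachable, since a vertex can step aside to the spare colour.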
Bayesian estimation of decomposable Gaussian graphical models
This thesis explains to statisticians what graphical models are and how to use them for statistical inference; in particular, how to use decomposable graphical models for efficient inference in covariance selection and multivariate regression problems. The first aim of the thesis is to show that decomposable graphical models are worth using within a Bayesian framework. The second aim is to make the techniques of graphical
models fully accessible to statisticians.
To achieve these aims the thesis makes a number of statistical contributions.
First, it proposes a new prior for decomposable graphs and a simulation methodology
for estimating this prior. Second, it proposes a number of Markov chain Monte
Carlo sampling schemes based on graphical techniques. The thesis also presents
some new graphical results, and some existing results are reproved to make them
more readily understood. Appendix 8.1 contains all the programs written to carry
out the inference discussed in the thesis, together with both a summary of the theory
on which they are based and a line-by-line description of how each routine works.
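A graph is decomposable exactly when it is chordal (every cycle of length at least four has a chord), and chordality can be tested with maximum cardinality search. The sketch below uses the simple cubic-time clique check rather than the linear-time version, and the encoding is illustrative.

```python
def is_decomposable(vertices, edges):
    """Test decomposability (equivalently chordality) with maximum
    cardinality search: a graph is chordal iff, in an MCS ordering,
    the earlier-numbered neighbours of every vertex form a clique.
    """
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    # Maximum cardinality search: repeatedly number the vertex with
    # the most already-numbered neighbours.
    order, numbered = [], set()
    weight = {v: 0 for v in vertices}
    for _ in vertices:
        v = max((u for u in vertices if u not in numbered),
                key=lambda u: weight[u])
        order.append(v)
        numbered.add(v)
        for w in adj[v]:
            if w not in numbered:
                weight[w] += 1

    # Clique check on earlier-numbered neighbours (simple O(n^3) version).
    pos = {v: i for i, v in enumerate(order)}
    for v in order:
        earlier = [w for w in adj[v] if pos[w] < pos[v]]
        for i, a in enumerate(earlier):
            if any(b not in adj[a] for b in earlier[i + 1:]):
                return False
    return True

# A chordless 4-cycle is not decomposable; adding a chord makes it so.
print(is_decomposable([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)]))          # False
print(is_decomposable([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]))  # True
```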
Some recent developments on the Steklov eigenvalue problem
The Steklov eigenvalue problem, first introduced over 125 years ago, has seen
a surge of interest in the past few decades. This article is a tour of some of
the recent developments linking the Steklov eigenvalues and eigenfunctions of
compact Riemannian manifolds to the geometry of the manifolds. Topics include
isoperimetric-type upper and lower bounds on Steklov eigenvalues (first in the
case of surfaces and then in higher dimensions), stability and instability of
eigenvalues under deformations of the Riemannian metric, optimisation of
eigenvalues and connections to free boundary minimal surfaces in balls, inverse
problems and isospectrality, discretisation, and the geometry of
eigenfunctions. We begin with background material and motivating examples for
readers that are new to the subject. Throughout the tour, we frequently compare
and contrast the behavior of the Steklov spectrum with that of the Laplace
spectrum. We include many open problems in this rapidly expanding area. (157 pages, 7 figures; to appear in Revista Matemática Complutense.)
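For readers new to the subject, it may help to have the statement of the problem in front of them. On a compact Riemannian manifold M with smooth boundary, the Steklov eigenvalue problem asks for the real numbers σ admitting a nonzero solution of

```latex
\begin{cases}
  \Delta u = 0 & \text{in } M, \\
  \partial_\nu u = \sigma\, u & \text{on } \partial M,
\end{cases}
```

where ∂_ν denotes the outward normal derivative. The eigenvalues form a discrete sequence 0 = σ_0 ≤ σ_1 ≤ σ_2 ≤ … tending to infinity, and they coincide with the eigenvalues of the Dirichlet-to-Neumann operator on ∂M.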
Inference and experimental design for percolation and random graph models
The problem of optimal arrangement of nodes of a random weighted graph is
studied in this thesis. The nodes of graphs under study are fixed, but their edges
are random and established according to a so-called edge-probability function.
This function is assumed to depend on the weights attributed to the pairs of graph
nodes (or distances between them) and a statistical parameter. It is the purpose
of experimentation to make inference on the statistical parameter and thus to
extract as much information about it as possible. We also distinguish between two
different experimentation scenarios: progressive and instructive designs.
We adopt a utility-based Bayesian framework to tackle the optimal design
problem for random graphs of this kind. Simulation based optimisation methods,
mainly Monte Carlo and Markov Chain Monte Carlo, are used to obtain
the solution. We study the optimal design problem for inference based on partial
observations of random graphs by employing a data augmentation technique.
We prove that the infinitely growing or diminishing node configurations asymptotically
represent the worst node arrangements. We also obtain the exact solution
to the optimal design problem for proximity graphs (geometric graphs) and numerical
solution for graphs with threshold edge-probability functions.
We consider inference and optimal design problems for finite clusters from bond
percolation on the integer lattice Z^d and derive a range of both numerical and
analytical results for these graphs. We introduce inner-outer plots by deleting
some of the lattice nodes and show that the ‘mostly populated’ designs are not
necessarily optimal in the case of incomplete observations under both progressive
and instructive design scenarios.
Finally, we formulate a problem of approximating finite point sets with lattice
nodes and describe a solution to this problem.
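As a toy illustration of the setting, the sketch below samples a random graph on fixed nodes and evaluates the Bernoulli log-likelihood on which inference about the statistical parameter would be based. The exponential edge-probability function p_ij = exp(-θ·d_ij) and all names are assumptions for illustration, not the thesis's model.

```python
import math
import random

def sample_graph(nodes, theta, seed=0):
    """Sample edges independently; edge (i, j) appears with probability
    given by the (assumed) edge-probability function exp(-theta * d_ij),
    where d_ij is the Euclidean distance between nodes i and j.
    """
    rng = random.Random(seed)
    edges = []
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            d = math.dist(nodes[i], nodes[j])
            if rng.random() < math.exp(-theta * d):
                edges.append((i, j))
    return edges

def log_likelihood(theta, nodes, edges):
    """Bernoulli log-likelihood of an observed edge set under the same
    assumed edge-probability function, as a basis for inference on theta."""
    edge_set = set(edges)
    ll = 0.0
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            p = math.exp(-theta * math.dist(nodes[i], nodes[j]))
            ll += math.log(p) if (i, j) in edge_set else math.log(1.0 - p)
    return ll

nodes = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]
edges = sample_graph(nodes, theta=0.5)
print(log_likelihood(0.5, nodes, edges) < 0.0)  # True: every Bernoulli term is negative
```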
Reinforcing connectionism: learning the statistical way
Connectionism's main contribution to cognitive science will prove to be the renewed impetus it has imparted to learning. Learning can be integrated into the existing theoretical foundations of the subject, and the resulting combination, statistical computational theories, provides a framework within which many connectionist mathematical mechanisms naturally fit. Examples from supervised and reinforcement learning demonstrate this.

Statistical computational theories already exist for certain associative matrix memories. This work is extended, allowing real-valued synapses and arbitrarily biased inputs. It shows that a covariance learning rule optimises the signal-to-noise ratio, a measure of the potential quality of the memory, and quantifies the performance penalty incurred by other rules. In particular, two rules that have been suggested as occurring naturally are shown to be asymptotically optimal in the limit of sparse coding. The mathematical model is justified in comparison with other treatments whose results differ.

Reinforcement comparison is a way of hastening the learning of reinforcement learning systems in statistical environments. Previous theoretical analysis has not distinguished between different comparison terms, even though, empirically, a covariance rule has been shown to be better than just a constant one. The workings of reinforcement comparison are investigated by a second-order analysis of the expected statistical performance of learning, and an alternative rule is proposed and empirically justified.

The existing proof that temporal difference prediction learning converges in the mean is extended from a special case involving adjacent time steps to the general case involving arbitrary ones. The interaction between the statistical mechanism of temporal difference and the linear representation is particularly stark. The performance of the method given a linearly dependent representation is also analysed.
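The temporal difference prediction setting discussed above can be sketched in a few lines: a linear value representation V(s) = w·φ(s), updated from observed transitions. The chain, features, and parameters below are illustrative assumptions, not taken from the thesis.

```python
import random

def td0_linear(features, transitions, rewards, steps=2000,
               alpha=0.05, gamma=0.9, seed=0):
    """TD(0) prediction with a linear representation V(s) = w . phi(s).

    After each observed transition s -> s' with reward r(s), update
        w += alpha * (r + gamma * V(s') - V(s)) * phi(s).
    """
    rng = random.Random(seed)
    dim = len(next(iter(features.values())))
    w = [0.0] * dim

    def value(s):
        return sum(wi * fi for wi, fi in zip(w, features[s]))

    for _ in range(steps):
        s = rng.choice(list(transitions))    # sample a state
        s_next = rng.choice(transitions[s])  # observe a transition
        delta = rewards[s] + gamma * value(s_next) - value(s)
        for k, fk in enumerate(features[s]):
            w[k] += alpha * delta * fk
    return w

# Two-state chain with one-hot (tabular) features: A -> B -> A -> ...
w = td0_linear({"A": [1.0, 0.0], "B": [0.0, 1.0]},
               {"A": ["B"], "B": ["A"]},
               {"A": 1.0, "B": 0.0})
# The weights approach the fixed point of V(A) = 1 + 0.9 V(B), V(B) = 0.9 V(A).
```

With one-hot features this reduces to tabular TD(0); a linearly dependent feature set, as analysed in the thesis, would make the weight vector non-unique even when the value estimates converge.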