22 research outputs found
Conference Program
Document provides a list of the sessions, speakers, workshops, and committees of the 32nd Summer Conference on Topology and Its Applications
Exact algorithms for network design problems using graph orientations
The subject of this thesis is exact solution strategies for topological network design
problems. These combinatorial optimization problems arise in various real-world
scenarios, e.g., in the telecommunication and energy industries. The prime task
is to plan or extend networks that physically connect customers. In general
we can describe such problems graph-theoretically as follows: given a set of nodes
(customers, street crossings, routers, etc.), a set of edges (potential connections, e.g.,
cables), and a cost function on the edges and/or nodes, we ask for a subset of nodes
and edges such that the sum of the costs of the selected elements is minimized while
side constraints such as connectivity, reliability, or cardinality are satisfied. In this
thesis we concentrate on two special classes of topological network design problems:
the k-cardinality tree problem (KCT) and the {0,1,2}-survivable network design
problem ({0,1,2}-SND) with node-connectivity constraints. These problems are in
general NP-hard, i.e., according to the current knowledge, it is very unlikely that
optimal solutions can be found efficiently (i.e., in polynomial time) for all possible
problem instances.
The above problems can be formulated as integer linear programs (ILPs), i.e.,
as systems of linear inequalities, integral variables, and a linear objective function.
Such models can be solved using methods of mathematical programming. In general,
the corresponding solution methods can be very time-consuming. This was
often used as an argument for developing (meta-)heuristics to obtain solutions fast,
although at the cost of their optimality. However, in this thesis we show that, exploiting
certain graph-theoretic properties of the feasible solutions, we are able to
solve large real-world problem instances to provable optimality efficiently in practice.
Based on orientation properties of optimal solutions we formulate new, provably
stronger ILPs and solve them via specially tailored branch-and-cut algorithms.
Our extensive polyhedral analyses show that these models give tighter descriptions
of the solution spaces and also offer certain algorithmic advantages in practice. In
the context of {0,1,2}-SND we are able to present the first orientation property
of 2-node-connected graphs that leads to a provably stronger ILP formulation,
thereby answering a long-standing open research question. Until recently, both of our
problem classes allowed optimal solutions only for instances with roughly up to 200
nodes; our experimental results show that our new approaches handle instances with
thousands of nodes. Especially for the KCT problem, our exact method is often
even faster than state-of-the-art metaheuristics, which usually do not find optimal
solutions.
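For concreteness, the KCT problem can be stated by brute force on a toy instance. This sketch is illustrative only (the graph, weights, and helper names are invented; the thesis solves KCT with orientation-based ILPs and branch-and-cut, not enumeration):

```python
# Hypothetical brute-force statement of the k-cardinality tree problem (KCT):
# find a minimum-weight subtree with exactly k edges of an edge-weighted graph.
# Only viable for tiny instances; shown purely to pin down the problem.
from itertools import combinations

def is_tree(edges):
    """True if the edge set forms a single tree (connected and acyclic)."""
    nodes = {v for e in edges for v in e[:2]}
    if len(edges) != len(nodes) - 1:
        return False
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v, _ in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False          # adding this edge would close a cycle
        parent[ru] = rv
    return True

def kct_brute_force(edges, k):
    """Minimum-weight tree with exactly k edges, or None if none exists."""
    best = None
    for subset in combinations(edges, k):
        if is_tree(list(subset)):
            w = sum(c for _, _, c in subset)
            if best is None or w < best[0]:
                best = (w, subset)
    return best

# Toy instance: a weighted triangle plus a pendant edge.
edges = [("a", "b", 1), ("b", "c", 2), ("a", "c", 4), ("c", "d", 1)]
print(kct_brute_force(edges, 2))  # minimum-weight subtree with 2 edges
```

The exponential enumeration over edge subsets is exactly what the NP-hardness remark above refers to; the thesis's contribution is avoiding it in practice via provably stronger ILP formulations.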
Overcoming Vulnerability in the Life Course - Reflections on a Research Program
This chapter reflects on the twelve-year Swiss research program, “Overcoming vulnerability: Life course perspectives” (LIVES). The authors are longstanding members of its scientific advisory committee. They highlight the program’s major accomplishments, identify key ingredients of the program’s success as well as some of its challenges, and raise promising avenues for future scholarship. Their insights will be of particular interest to those who wish to launch similar large-scale collaborative enterprises. LIVES has been a landmark project in advancing the conceptualization, measurement, and analysis of vulnerability over the life course. The foundation it has provided will direct the next era of scholarship toward even greater specificity: in understanding the conditions under which vulnerability matters, for whom, when, and how. In a process-oriented life-course perspective, vulnerability is not viewed as a persistent or permanent condition but rather as a dormant condition of the social actor, activated in particular situations and contexts.
Recherche de structure dans un graphe aléatoire : modèles à espace latent
This thesis addresses the clustering of the nodes of a graph, in the framework of random models with latent variables. To each node i an unobserved (latent) variable Zi is allocated, and the probability of nodes i and j being connected depends conditionally on Zi and Zj. Unlike the Erdős-Rényi model, connections are not independent and identically distributed; the latent variables rule the connection distribution of the nodes. These models are thus heterogeneous, and their structure is fully described by the latent variables and their distribution. Hence we aim at inferring them from the graph, which is the only observed data. In both original works of this thesis, we propose consistent inference methods with a computational cost at most linear in the number of nodes or edges, so that large graphs can be processed in reasonable time. Both are based on a study of the distribution of the degrees, normalized in a way suited to the model. The first work deals with the Stochastic Blockmodel. We show the consistency of an unsupervised classification algorithm using concentration inequalities. From it we deduce a parametric estimation method, a model selection method for the number of latent classes, and a clustering test (testing whether there is one cluster or more), all of which are proved to be consistent. In the second work, the latent variables are positions in the space ℝd, having a density f. The connection probability depends on the distance between the node positions. The clusters are defined as the connected components of a level set {x : f(x) > t} of f, for a fixed threshold t > 0, and the goal is to estimate their number from the observed graph alone. We estimate the density at the latent positions of the nodes from their degrees, which allows us to establish a correspondence between the clusters and the connected components of certain subgraphs of the observed graph, obtained by removing low-degree nodes. In particular, we derive an estimator of the number of clusters and show its consistency in a certain sense.
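A rough sketch of the second method's pipeline, under invented parameters (two 1-d Gaussian clusters, a geometric connection rule, and an ad-hoc degree threshold), not the thesis's actual estimator:

```python
# Hypothetical sketch: estimate the number of latent clusters by pruning
# low-degree nodes (degree as a proxy for the latent density) and counting
# connected components of the remaining induced subgraph.
import random
from collections import deque

random.seed(0)

# Invented latent positions: two well-separated 1-d Gaussian clusters.
positions = [random.gauss(0.0, 0.3) for _ in range(60)] + \
            [random.gauss(5.0, 0.3) for _ in range(60)]
n = len(positions)

# Connection rule: link nodes whose latent positions are closer than a radius.
radius = 1.0
adj = {i: set() for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if abs(positions[i] - positions[j]) < radius:
            adj[i].add(j)
            adj[j].add(i)

# Keep only nodes whose degree (the density estimate) exceeds a level threshold.
degrees = {i: len(adj[i]) for i in range(n)}
threshold = 0.5 * max(degrees.values())   # ad-hoc choice for this sketch
kept = {i for i in range(n) if degrees[i] >= threshold}

def count_components(nodes, adj):
    """Count connected components of the induced subgraph with BFS."""
    seen, components = set(), 0
    for start in nodes:
        if start in seen:
            continue
        components += 1
        queue = deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v in nodes and v not in seen:
                    seen.add(v)
                    queue.append(v)
    return components

print(count_components(kept, adj))
```

With well-separated clusters the surviving high-degree nodes split into one component per cluster, which is the correspondence the abstract describes.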
Blockmodeling Techniques for Complex Networks.
The class of network models known as stochastic blockmodels has recently been gaining popularity. In this dissertation, we present new work that uses blockmodels to answer questions about networks. We create a blockmodel based on the idea of link communities, which naturally gives rise to overlapping vertex communities. We derive a fast and accurate algorithm to fit the model to networks. This model can be related to another blockmodel, which allows the method to efficiently find nonoverlapping communities as well. We then create a heuristic based on the link community model whose use is to find the correct number of communities in a network. The heuristic is based on intuitive corrections to likelihood ratio tests. It does a good job finding the correct number of communities in both real networks and synthetic networks generated from the link communities model. Two commonly studied types of networks are citation networks, where research papers cite other papers, and coauthorship networks, where authors are connected if they've written a paper together. We study a multi-modal network from a large dataset of Physics publications that is the combination of the two, allowing for directed links between papers as citations, and an undirected edge between a scientist and a paper if they helped to write it. This allows for new insights on the relation between social interaction and scientific production. We also have the publication dates of papers, which lets us track our measures over time. Finally, we create a stochastic model for ranking vertices in a semi-directed network. The probability of connection between two vertices depends on the difference of their ranks. When this model is fit to high school friendship networks, the ranks appear to correspond with a measure of social status. 
Students have reciprocated, and some unreciprocated, edges with other students of closely similar rank, corresponding to true friendship, and a fraction of the time claim an aspirational friendship with a much higher-ranked individual. In general, students with more friends have higher ranks than those with fewer friends, and older students have higher ranks than younger students.
PhD, Physics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/108855/1/briball_1.pd
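As background for the blockmodel machinery discussed above, a minimal stochastic blockmodel sampler may help; everything here (block sizes, probabilities, function names) is an invented illustration showing only the defining property that edge probability depends solely on the endpoints' block memberships:

```python
# Illustrative two-block stochastic blockmodel sampler (not the dissertation's
# code).  Edge probability depends only on the blocks of the two endpoints.
import random

random.seed(1)

def sample_sbm(block_sizes, p_matrix):
    """Return (labels, edge set) for an undirected stochastic blockmodel."""
    labels = [b for b, size in enumerate(block_sizes) for _ in range(size)]
    n = len(labels)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p_matrix[labels[i]][labels[j]]:
                edges.add((i, j))
    return labels, edges

# Dense within blocks, sparse between: classic assortative community structure.
labels, edges = sample_sbm([50, 50], [[0.30, 0.02], [0.02, 0.30]])
within = sum(1 for i, j in edges if labels[i] == labels[j])
between = len(edges) - within
print(within, between)
```

Fitting such a model in reverse, i.e. inferring the block labels and the probability matrix from an observed graph, is the task the dissertation's likelihood-based algorithms address.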
The relationship between business and society : towards a context-and culture-sensitive approach for the 21st century
This dissertation investigates the nexus between business and society, with a special focus on corporate sustainability perspectives within and between national contexts. Based on a between-country comparison, the dissertation presents an analytic framework suitable for identifying opportunities and challenges related to expectations towards corporations by mapping, comparing and contrasting the perspectives of different stakeholders on issues of corporate sustainability.
An online adaptive learning algorithm for optimal trade execution in high-frequency markets
A thesis submitted in fulfilment of the requirements for the degree of Doctor of Philosophy
in the Faculty of Science, School of Computer Science and Applied Mathematics
University of the Witwatersrand, October 2016.

Automated algorithmic trade execution is a central problem in modern financial markets;
however, finding and navigating optimal trajectories in this system is a non-trivial
task. Many authors have developed exact analytical solutions by making simplifying
assumptions about the governing dynamics, but for practical feasibility and robustness
a more dynamic approach is needed, one that captures the spatial and temporal
complexity of the system and adapts as intraday regimes change.
This thesis aims to consolidate four key ideas: 1) the financial market as a complex
adaptive system, where purposeful agents with varying system visibility collectively and
simultaneously create and perceive their environment as they interact with it; 2) spin
glass models as a tractable formalism to model phenomena in this complex system; 3) the
multivariate Hawkes process as a candidate governing process for limit order book events;
and 4) reinforcement learning as a framework for online, adaptive learning. Combined
with the data and computational challenges of developing an efficient, machine-scale
trading algorithm, we present a feasible scheme which systematically encodes these ideas.
We first determine the efficacy of the proposed learning framework, under the conjecture
of approximate Markovian dynamics in the equity market. We find that a simple lookup
table Q-learning algorithm, with discrete state attributes and discrete actions, is able
to improve post-trade implementation shortfall by adapting a typical static arrival-price
volume trajectory with respect to prevailing market microstructure features streaming
from the limit order book.
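The lookup-table idea can be sketched on a toy environment. Everything below (the state names, actions, and reward rule) is an invented stand-in, not the thesis's limit-order-book setup; it shows only the tabular Q-learning update with discrete states and actions:

```python
# Hedged sketch of lookup-table Q-learning with discrete states and actions.
# The toy "market" reward rule is an assumption made up for this example.
import random

random.seed(2)

STATES = ["calm", "volatile"]           # stand-ins for discretised features
ACTIONS = ["trade_less", "trade_more"]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1       # learning rate, discount, exploration

def step(state, action):
    """Invented reward: trading more pays off in calm markets, not volatile ones."""
    reward = 1.0 if (state == "calm") == (action == "trade_more") else -1.0
    next_state = random.choice(STATES)
    return reward, next_state

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

state = "calm"
for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < EPS:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    reward, nxt = step(state, action)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    # Q-learning update toward the bootstrapped target
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = nxt

print(max(ACTIONS, key=lambda a: Q[("calm", a)]))
print(max(ACTIONS, key=lambda a: Q[("volatile", a)]))
```

In the thesis the states come from streaming microstructure features and the actions adjust a static arrival-price volume trajectory; the update rule, however, is exactly this tabular one.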
To enumerate a scale-specific state space whilst avoiding the curse of dimensionality, we
propose a novel approach to detect the intraday temporal financial market state at each
decision point in the Q-learning algorithm, inspired by the complex adaptive system
paradigm. A physical analogy to the ferromagnetic Potts model at thermal equilibrium
is used to develop a high-speed maximum likelihood clustering algorithm, appropriate
for measuring critical or near-critical temporal states in the financial system. State
features are studied to extract time-scale-specific state signature vectors, which serve as
low-dimensional state descriptors and enable online state detection.
To assess the impact of agent interactions on the system, a multivariate Hawkes process is
used to measure the resiliency of the limit order book with respect to liquidity-demand
events of varying size. By studying the branching ratios associated with key quote
replenishment intensities following trades, we ensure that the limit order book is expected
to be resilient with respect to the maximum permissible trade executed by the agent.
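The branching-ratio argument can be illustrated with a univariate exponential-kernel Hawkes process (the thesis uses a multivariate one, and all parameters below are invented). The branching ratio n = alpha/beta is the expected number of events directly triggered by one event, and for n < 1 the stationary event count over [0, T] is about mu*T/(1 - n):

```python
# Hedged univariate sketch of the Hawkes branching ratio.  Intensity:
#   lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i)),  t_i < t.
# Simulated with Ogata's thinning algorithm; parameters are illustrative.
import math
import random

random.seed(3)

def simulate_hawkes(mu, alpha, beta, horizon):
    """Event times of an exponential-kernel Hawkes process on [0, horizon]."""
    t, events, excitation = 0.0, [], 0.0   # excitation = sum of decayed kernels
    while True:
        lam_bar = mu + excitation          # intensity only decays between events
        w = random.expovariate(lam_bar)    # candidate waiting time
        t += w
        if t > horizon:
            return events
        excitation *= math.exp(-beta * w)  # decay excitation to the candidate time
        if random.random() <= (mu + excitation) / lam_bar:   # thinning acceptance
            events.append(t)
            excitation += alpha            # the new event excites the process

mu, alpha, beta, horizon = 0.5, 0.5, 1.0, 2000.0
events = simulate_hawkes(mu, alpha, beta, horizon)
n_true = alpha / beta                      # branching ratio = 0.5
# Invert E[count] = mu * T / (1 - n) to back out the branching ratio.
n_hat = 1.0 - mu * horizon / len(events)
print(round(n_true, 2), round(n_hat, 2))
```

A branching ratio near one would mean each liquidity-demand event triggers almost one replenishment cascade of the same size, i.e. a near-critical, fragile book; the resiliency check above amounts to verifying n stays safely below one for the relevant quote intensities.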
Finally we present a feasible scheme for unsupervised state discovery, state detection,
and online learning for high-frequency quantitative trading agents faced with a
multi-featured, asynchronous market data feed. We provide a technique for enumerating
the state space at the scale at which the agent interacts with the system, incorporating
the effects of a live trading agent on limit order book dynamics into the market data
feed, and hence the perceived state evolution.